2025-09-09T14:05:24.6364798Z Current runner version: '2.328.0' 2025-09-09T14:05:24.6371071Z Runner name: 'i-0220e75f15255c02c' 2025-09-09T14:05:24.6371876Z Runner group name: 'default' 2025-09-09T14:05:24.6372759Z Machine name: 'ip-10-0-15-139' 2025-09-09T14:05:24.6375574Z ##[group]GITHUB_TOKEN Permissions 2025-09-09T14:05:24.6377858Z Contents: read 2025-09-09T14:05:24.6378571Z Metadata: read 2025-09-09T14:05:24.6379101Z ##[endgroup] 2025-09-09T14:05:24.6381319Z Secret source: Actions 2025-09-09T14:05:24.6382134Z Prepare workflow directory 2025-09-09T14:05:24.6955781Z Prepare all required actions 2025-09-09T14:05:24.6996667Z Getting action download info 2025-09-09T14:05:25.0197647Z Download action repository 'actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683' (SHA:11bd71901bbe5b1630ceea73d27597364c9af683) 2025-09-09T14:05:25.3132166Z Download action repository 'pytorch/pytorch@main' (SHA:4dd73e659a8fd4872e5f49cfd72e420fa7c4e6c9) 2025-09-09T14:05:38.9778694Z Download action repository 'actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093' (SHA:d3f86a106a0bac45b974a628896c90dbdf5c8093) 2025-09-09T14:05:39.3117104Z Download action repository 'pmeier/pytest-results-action@a2c1430e2bddadbad9f49a6f9b879f062c6b19b1' (SHA:a2c1430e2bddadbad9f49a6f9b879f062c6b19b1) 2025-09-09T14:05:39.5535996Z Download action repository 'actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02' (SHA:ea165f8d65b6e75b540449e92b4886f43607fa02) 2025-09-09T14:05:40.0520819Z Getting action download info 2025-09-09T14:05:40.2474084Z Uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@refs/heads/main (e502b6d9079a2a411c68046e8a7694b851c5df33) 2025-09-09T14:05:40.2480519Z ##[group] Inputs 2025-09-09T14:05:40.2483542Z script: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:40.2487354Z timeout: 180 2025-09-09T14:05:40.2487774Z runner: linux.4xlarge 2025-09-09T14:05:40.2488229Z upload-artifact: 2025-09-09T14:05:40.2489089Z upload-artifact-to-s3: false 2025-09-09T14:05:40.2489610Z download-artifact: 2025-09-09T14:05:40.2490047Z repository: 2025-09-09T14:05:40.2490476Z fetch-depth: 1 2025-09-09T14:05:40.2490884Z submodules: recursive 2025-09-09T14:05:40.2491294Z ref: 2025-09-09T14:05:40.2491753Z test-infra-repository: pytorch/test-infra 2025-09-09T14:05:40.2492353Z test-infra-ref: 2025-09-09T14:05:40.2492834Z use-custom-docker-registry: true 2025-09-09T14:05:40.2493425Z docker-image: pytorch/almalinux-builder 2025-09-09T14:05:40.2494044Z docker-build-dir: .ci/docker 2025-09-09T14:05:40.2494565Z gpu-arch-type: cpu 2025-09-09T14:05:40.2494988Z gpu-arch-version: 2025-09-09T14:05:40.2495409Z job-name: linux-job 2025-09-09T14:05:40.2495859Z continue-on-error: false 2025-09-09T14:05:40.2496377Z binary-matrix: 2025-09-09T14:05:40.2496813Z run-with-docker: true 2025-09-09T14:05:40.2497257Z secrets-env: 2025-09-09T14:05:40.2497699Z no-sudo: false 2025-09-09T14:05:40.2498127Z ##[endgroup] 2025-09-09T14:05:40.2499254Z Complete job name: test-nightly (CPU Nightly, linux.4xlarge, --pre torch --index-url https://download.pytorch.org/wh... 
/ linux-job 2025-09-09T14:05:40.2988860Z A job started hook has been configured by the self-hosted runner administrator 2025-09-09T14:05:40.3099825Z ##[group]Run '/home/ec2-user/runner-scripts/before_job.sh' 2025-09-09T14:05:40.3109077Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:40.3109747Z ##[endgroup] 2025-09-09T14:05:41.8750409Z Runner Type: linux.4xlarge 2025-09-09T14:05:41.8750910Z Instance Type: c5.4xlarge 2025-09-09T14:05:41.8751189Z AMI Name: unknown 2025-09-09T14:05:41.8777259Z AMI ID: ami-05ffe3c48a9991133 2025-09-09T14:05:47.3260763Z ##[group]Run set -euxo pipefail 2025-09-09T14:05:47.3261183Z set -euxo pipefail 2025-09-09T14:05:47.3261694Z if [[ "${NO_SUDO}" == "false" ]]; then 2025-09-09T14:05:47.3262082Z  echo "::group::Cleanup with-sudo debug output" 2025-09-09T14:05:47.3262484Z  sudo rm -rfv "${GITHUB_WORKSPACE}" 2025-09-09T14:05:47.3262797Z else 2025-09-09T14:05:47.3263077Z  echo "::group::Cleanup no-sudo debug output" 2025-09-09T14:05:47.3263454Z  rm -rfv "${GITHUB_WORKSPACE}" 2025-09-09T14:05:47.3263759Z fi 2025-09-09T14:05:47.3263980Z  2025-09-09T14:05:47.3264207Z mkdir -p "${GITHUB_WORKSPACE}" 2025-09-09T14:05:47.3264547Z echo "::endgroup::" 2025-09-09T14:05:47.3273693Z shell: /usr/bin/bash -e {0} 2025-09-09T14:05:47.3273992Z env: 2025-09-09T14:05:47.3274238Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:47.3274587Z REPOSITORY: pytorch/ao 2025-09-09T14:05:47.3275009Z PR_NUMBER: 2963 2025-09-09T14:05:47.3276555Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:47.3278177Z NO_SUDO: false 2025-09-09T14:05:47.3278411Z ##[endgroup] 2025-09-09T14:05:47.3305625Z + [[ false == \f\a\l\s\e ]] 2025-09-09T14:05:47.3320577Z ##[group]Cleanup with-sudo debug output 2025-09-09T14:05:47.3324016Z + echo '::group::Cleanup with-sudo debug output' 2025-09-09T14:05:47.3324477Z + sudo rm -rfv /home/ec2-user/actions-runner/_work/ao/ao 2025-09-09T14:05:47.4460308Z removed directory '/home/ec2-user/actions-runner/_work/ao/ao' 2025-09-09T14:05:47.4475762Z + mkdir -p /home/ec2-user/actions-runner/_work/ao/ao 2025-09-09T14:05:47.4487333Z + echo ::endgroup:: 2025-09-09T14:05:47.4488661Z ##[endgroup] 2025-09-09T14:05:47.4619173Z ##[group]Run actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 2025-09-09T14:05:47.4619664Z with: 2025-09-09T14:05:47.4619894Z repository: pytorch/test-infra 2025-09-09T14:05:47.4620203Z path: test-infra 2025-09-09T14:05:47.4620436Z submodules: recursive 2025-09-09T14:05:47.4620887Z token: *** 2025-09-09T14:05:47.4621104Z ssh-strict: true 2025-09-09T14:05:47.4621343Z ssh-user: git 2025-09-09T14:05:47.4621582Z persist-credentials: true 2025-09-09T14:05:47.4621857Z clean: true 2025-09-09T14:05:47.4622102Z sparse-checkout-cone-mode: true 2025-09-09T14:05:47.4622391Z fetch-depth: 1 2025-09-09T14:05:47.4622626Z fetch-tags: false 2025-09-09T14:05:47.4622853Z show-progress: true 2025-09-09T14:05:47.4623097Z lfs: false 2025-09-09T14:05:47.4623313Z set-safe-directory: true 2025-09-09T14:05:47.4623604Z env: 2025-09-09T14:05:47.4623838Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:47.4624187Z REPOSITORY: pytorch/ao 2025-09-09T14:05:47.4624487Z PR_NUMBER: 2963 2025-09-09T14:05:47.4626000Z SCRIPT: 
conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:47.4627626Z ##[endgroup] 2025-09-09T14:05:47.5753760Z Syncing repository: pytorch/test-infra 2025-09-09T14:05:47.5754599Z ##[group]Getting Git version info 2025-09-09T14:05:47.5755205Z Working directory is '/home/ec2-user/actions-runner/_work/ao/ao/test-infra' 2025-09-09T14:05:47.5755899Z [command]/usr/bin/git version 2025-09-09T14:05:47.5756198Z git version 2.47.1 2025-09-09T14:05:47.5767553Z ##[endgroup] 2025-09-09T14:05:47.5788011Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/1d609c36-1a80-4936-9165-8b97d163c396' before making global git config changes 2025-09-09T14:05:47.5789044Z Adding repository directory to the temporary git global config as a safe directory 2025-09-09T14:05:47.5793135Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/ao/ao/test-infra 2025-09-09T14:05:47.5821405Z ##[group]Initializing the repository 2025-09-09T14:05:47.5826084Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/ao/ao/test-infra 2025-09-09T14:05:47.5855479Z hint: Using 'master' as the name for the initial branch. This default branch name 2025-09-09T14:05:47.5856132Z hint: is subject to change. To configure the initial branch name to use in all 2025-09-09T14:05:47.5856734Z hint: of your new repositories, which will suppress this warning, call: 2025-09-09T14:05:47.5857184Z hint: 2025-09-09T14:05:47.5857494Z hint: git config --global init.defaultBranch 2025-09-09T14:05:47.5857851Z hint: 2025-09-09T14:05:47.5858195Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and 2025-09-09T14:05:47.5858833Z hint: 'development'. 
The just-created branch can be renamed via this command: 2025-09-09T14:05:47.5859285Z hint: 2025-09-09T14:05:47.5859530Z hint: git branch -m 2025-09-09T14:05:47.5860073Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/ao/ao/test-infra/.git/ 2025-09-09T14:05:47.5865888Z [command]/usr/bin/git remote add origin https://github.com/pytorch/test-infra 2025-09-09T14:05:47.5890558Z ##[endgroup] 2025-09-09T14:05:47.5891030Z ##[group]Disabling automatic garbage collection 2025-09-09T14:05:47.5894539Z [command]/usr/bin/git config --local gc.auto 0 2025-09-09T14:05:47.5918805Z ##[endgroup] 2025-09-09T14:05:47.5919245Z ##[group]Setting up auth 2025-09-09T14:05:47.5924534Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2025-09-09T14:05:47.5950447Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :" 2025-09-09T14:05:47.6228775Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-09-09T14:05:47.6255086Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-09-09T14:05:47.6512602Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-09-09T14:05:47.6553073Z ##[endgroup] 2025-09-09T14:05:47.6553939Z ##[group]Determining the default branch 2025-09-09T14:05:47.6557256Z Retrieving the default branch name 2025-09-09T14:05:47.9221238Z Default branch 'main' 2025-09-09T14:05:47.9221986Z ##[endgroup] 2025-09-09T14:05:47.9222436Z ##[group]Fetching the repository 2025-09-09T14:05:47.9227455Z [command]/usr/bin/git -c protocol.version=2 fetch --no-tags --prune --no-recurse-submodules --depth=1 origin +refs/heads/main:refs/remotes/origin/main 2025-09-09T14:05:48.3381380Z From https://github.com/pytorch/test-infra 2025-09-09T14:05:48.3381857Z * [new branch] main -> origin/main 2025-09-09T14:05:48.3402262Z ##[endgroup] 2025-09-09T14:05:48.3402662Z ##[group]Determining the checkout info 2025-09-09T14:05:48.3403735Z ##[endgroup] 2025-09-09T14:05:48.3408191Z [command]/usr/bin/git sparse-checkout disable 2025-09-09T14:05:48.3440588Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-09-09T14:05:48.3464053Z ##[group]Checking out the ref 2025-09-09T14:05:48.3467743Z [command]/usr/bin/git checkout --progress --force -B main refs/remotes/origin/main 2025-09-09T14:05:48.4597306Z Switched to a new branch 'main' 2025-09-09T14:05:48.4598490Z branch 'main' set up to track 'origin/main'. 
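The SCRIPT value that appears in the env dumps above (and throughout the rest of this log) is a multi-line shell script whose newlines are collapsed onto a single line when the log renders it. Reconstructed one command per line, with the line breaks confirmed by the echoed copy of the script near the end of this log; the comments are annotations, not part of the original input:

  # create and activate the test environment
  conda create -n venv python=3.9 -y
  conda activate venv
  # install the nightly CPU torch wheel, the repo's dev requirements, and the repo itself
  python -m pip install --upgrade pip
  pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
  pip install -r dev-requirements.txt
  pip install .
  # put the conda env's lib directory on LD_LIBRARY_PATH before running the tests
  export CONDA=$(dirname $(dirname $(which conda)))
  export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH
  pytest test --verbose -s

conda activate works here in a non-interactive shell because the wrapper script generated later in the job prepends eval "$(conda shell.bash hook)" before running these commands.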
2025-09-09T14:05:48.4605565Z ##[endgroup] 2025-09-09T14:05:48.4606001Z ##[group]Setting up auth for fetching submodules 2025-09-09T14:05:48.4611558Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-09-09T14:05:48.4650392Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-09-09T14:05:48.4678043Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-09-09T14:05:48.4706024Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-09-09T14:05:48.4732000Z ##[endgroup] 2025-09-09T14:05:48.4732400Z ##[group]Fetching submodules 2025-09-09T14:05:48.4737527Z [command]/usr/bin/git submodule sync --recursive 2025-09-09T14:05:48.5000575Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --depth=1 --recursive 2025-09-09T14:05:48.5262940Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2025-09-09T14:05:48.5518956Z ##[endgroup] 2025-09-09T14:05:48.5519431Z ##[group]Persisting credentials for submodules 2025-09-09T14:05:48.5526463Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2025-09-09T14:05:48.5787917Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2025-09-09T14:05:48.6044872Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2025-09-09T14:05:48.6304695Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2025-09-09T14:05:48.6564170Z ##[endgroup] 2025-09-09T14:05:48.6601965Z [command]/usr/bin/git log -1 --format=%H 2025-09-09T14:05:48.6623488Z e502b6d9079a2a411c68046e8a7694b851c5df33 2025-09-09T14:05:48.6826638Z Prepare all required actions 2025-09-09T14:05:48.6827176Z Getting action download info 2025-09-09T14:05:48.8440245Z Download action repository 'pytorch/test-infra@main' (SHA:e502b6d9079a2a411c68046e8a7694b851c5df33) 2025-09-09T14:05:50.8449778Z Getting action download info 2025-09-09T14:05:51.0299482Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482) 2025-09-09T14:05:51.2495348Z ##[group]Run ./test-infra/.github/actions/setup-linux 2025-09-09T14:05:51.2495705Z env: 2025-09-09T14:05:51.2495958Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:51.2496287Z REPOSITORY: pytorch/ao 2025-09-09T14:05:51.2496554Z PR_NUMBER: 2963 2025-09-09T14:05:51.2498098Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:51.2499655Z ##[endgroup] 2025-09-09T14:05:51.2585032Z ##[group]Run set -euo pipefail 2025-09-09T14:05:51.2585388Z set -euo pipefail 2025-09-09T14:05:51.2585688Z function get_ec2_metadata() { 2025-09-09T14:05:51.2586054Z  # Pulled from instance metadata endpoint for EC2 2025-09-09T14:05:51.2586697Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2025-09-09T14:05:51.2587259Z  category=$1 2025-09-09T14:05:51.2588165Z  curl -H "X-aws-ec2-metadata-token: $(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 30")" -fsSL "http://169.254.169.254/latest/meta-data/${category}" 2025-09-09T14:05:51.2589098Z } 2025-09-09T14:05:51.2589350Z echo "ami-id: $(get_ec2_metadata ami-id)" 2025-09-09T14:05:51.2589796Z echo "instance-id: $(get_ec2_metadata instance-id)" 2025-09-09T14:05:51.2590270Z echo "instance-type: $(get_ec2_metadata instance-type)" 2025-09-09T14:05:51.2590696Z echo "system info $(uname -a)" 2025-09-09T14:05:51.2596878Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:51.2597265Z env: 2025-09-09T14:05:51.2597504Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:51.2597854Z REPOSITORY: pytorch/ao 2025-09-09T14:05:51.2598110Z PR_NUMBER: 2963 2025-09-09T14:05:51.2599608Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:51.2601162Z ##[endgroup] 2025-09-09T14:05:51.2740187Z ami-id: ami-05ffe3c48a9991133 2025-09-09T14:05:51.2840113Z instance-id: i-0220e75f15255c02c 2025-09-09T14:05:51.2935923Z instance-type: c5.4xlarge 2025-09-09T14:05:51.2945518Z system info Linux ip-10-0-15-139.ec2.internal 6.1.141-155.222.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 17 10:29:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux 2025-09-09T14:05:51.2985781Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-09T14:05:51.2986747Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-09T14:05:51.2992669Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:51.2993046Z env: 2025-09-09T14:05:51.2993282Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:51.2993624Z REPOSITORY: pytorch/ao 2025-09-09T14:05:51.2993887Z PR_NUMBER: 2963 2025-09-09T14:05:51.2995685Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:51.2997261Z ##[endgroup] 2025-09-09T14:05:51.3071697Z ##[group]Run if systemctl is-active --quiet docker; then 2025-09-09T14:05:51.3072148Z if systemctl is-active --quiet docker; then 2025-09-09T14:05:51.3072521Z  echo "Docker daemon is running..."; 2025-09-09T14:05:51.3072849Z else 2025-09-09T14:05:51.3073188Z  echo "Starting docker deamon..." && sudo systemctl start docker; 2025-09-09T14:05:51.3073611Z fi 2025-09-09T14:05:51.3078818Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:51.3079194Z env: 2025-09-09T14:05:51.3079441Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:51.3079786Z REPOSITORY: pytorch/ao 2025-09-09T14:05:51.3080049Z PR_NUMBER: 2963 2025-09-09T14:05:51.3081521Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:51.3083078Z ##[endgroup] 2025-09-09T14:05:51.3152705Z Docker daemon is running... 2025-09-09T14:05:51.3358387Z ##[group]Run AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") 2025-09-09T14:05:51.3359054Z AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") 2025-09-09T14:05:51.3359582Z retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2025-09-09T14:05:51.3360217Z retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ 2025-09-09T14:05:51.3360957Z  --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" 2025-09-09T14:05:51.3366592Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:51.3367115Z env: 2025-09-09T14:05:51.3367351Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:51.3367696Z REPOSITORY: pytorch/ao 2025-09-09T14:05:51.3367938Z PR_NUMBER: 2963 2025-09-09T14:05:51.3369438Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:51.3370999Z AWS_RETRY_MODE: standard 2025-09-09T14:05:51.3371308Z AWS_MAX_ATTEMPTS: 5 2025-09-09T14:05:51.3371552Z AWS_DEFAULT_REGION: us-east-1 2025-09-09T14:05:51.3371826Z ##[endgroup] 2025-09-09T14:05:52.3504448Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-09-09T14:05:52.3505094Z Configure a credential helper to remove this warning. 
See 2025-09-09T14:05:52.3505837Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-09-09T14:05:52.3506359Z 2025-09-09T14:05:52.3506458Z Login Succeeded 2025-09-09T14:05:52.3552604Z ##[group]Run env | grep '^GITHUB' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-09-09T14:05:52.3553223Z env | grep '^GITHUB' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-09-09T14:05:52.3553726Z env | grep '^CI' >> "${RUNNER_TEMP}/github_env_${GITHUB_RUN_ID}" 2025-09-09T14:05:52.3559708Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:52.3560079Z env: 2025-09-09T14:05:52.3560332Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:52.3560664Z REPOSITORY: pytorch/ao 2025-09-09T14:05:52.3560927Z PR_NUMBER: 2963 2025-09-09T14:05:52.3562717Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:52.3596455Z ##[endgroup] 2025-09-09T14:05:52.3679818Z ##[group]Run RUNNER_ARTIFACT_DIR="${RUNNER_TEMP}/artifacts" 2025-09-09T14:05:52.3680313Z RUNNER_ARTIFACT_DIR="${RUNNER_TEMP}/artifacts" 2025-09-09T14:05:52.3680723Z sudo rm -rf "${RUNNER_ARTIFACT_DIR}" 2025-09-09T14:05:52.3681071Z mkdir -p "${RUNNER_ARTIFACT_DIR}" 2025-09-09T14:05:52.3681525Z echo "RUNNER_ARTIFACT_DIR=${RUNNER_ARTIFACT_DIR}" >> "${GITHUB_ENV}" 2025-09-09T14:05:52.3681963Z  2025-09-09T14:05:52.3682257Z RUNNER_TEST_RESULTS_DIR="${RUNNER_TEMP}/test-results" 2025-09-09T14:05:52.3682708Z sudo rm -rf "${RUNNER_TEST_RESULTS_DIR}" 2025-09-09T14:05:52.3683069Z mkdir -p "${RUNNER_TEST_RESULTS_DIR}" 2025-09-09T14:05:52.3683557Z echo "RUNNER_TEST_RESULTS_DIR=${RUNNER_TEST_RESULTS_DIR}" >> "${GITHUB_ENV}" 2025-09-09T14:05:52.3684027Z  2025-09-09T14:05:52.3684255Z RUNNER_DOCS_DIR="${RUNNER_TEMP}/docs" 2025-09-09T14:05:52.3684608Z sudo rm -rf "${RUNNER_DOCS_DIR}" 2025-09-09T14:05:52.3684926Z mkdir -p "${RUNNER_DOCS_DIR}" 2025-09-09T14:05:52.3685336Z echo "RUNNER_DOCS_DIR=${RUNNER_DOCS_DIR}" >> "${GITHUB_ENV}" 2025-09-09T14:05:52.3690847Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:52.3691222Z env: 2025-09-09T14:05:52.3691460Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:52.3691803Z REPOSITORY: pytorch/ao 2025-09-09T14:05:52.3692049Z PR_NUMBER: 2963 2025-09-09T14:05:52.3693561Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:52.3695259Z ##[endgroup] 2025-09-09T14:05:52.9422531Z ##[group]Run needs=0 2025-09-09T14:05:52.9422810Z needs=0 2025-09-09T14:05:52.9423177Z if lspci -v | grep -e 'controller.*NVIDIA' >/dev/null 2>/dev/null; then 2025-09-09T14:05:52.9423620Z  needs=1 2025-09-09T14:05:52.9423836Z fi 2025-09-09T14:05:52.9424093Z echo "does=${needs}" >> $GITHUB_OUTPUT 2025-09-09T14:05:52.9430871Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:52.9431252Z env: 2025-09-09T14:05:52.9431492Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:52.9431836Z REPOSITORY: pytorch/ao 2025-09-09T14:05:52.9432093Z PR_NUMBER: 2963 2025-09-09T14:05:52.9433610Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:52.9435417Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:52.9436022Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:52.9436572Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:52.9436968Z ##[endgroup] 2025-09-09T14:05:52.9674076Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-09-09T14:05:52.9674679Z # ignore expansion of "docker ps -q" since it could be empty 2025-09-09T14:05:52.9675217Z # shellcheck disable=SC2046 2025-09-09T14:05:52.9675748Z docker stop $(docker ps -q) || true 2025-09-09T14:05:52.9676106Z # Prune all of the docker images 2025-09-09T14:05:52.9676425Z docker system prune -af 2025-09-09T14:05:52.9682153Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:52.9682533Z env: 2025-09-09T14:05:52.9682780Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:52.9683105Z REPOSITORY: pytorch/ao 2025-09-09T14:05:52.9683362Z PR_NUMBER: 2963 2025-09-09T14:05:52.9684863Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:52.9686559Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:52.9687161Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:52.9687708Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:52.9688099Z ##[endgroup] 2025-09-09T14:05:52.9925003Z "docker stop" requires at least 1 argument. 2025-09-09T14:05:52.9925492Z See 'docker stop --help'. 2025-09-09T14:05:52.9925676Z 2025-09-09T14:05:52.9925838Z Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] 
2025-09-09T14:05:52.9926112Z 2025-09-09T14:05:52.9926232Z Stop one or more running containers 2025-09-09T14:05:53.0093524Z Total reclaimed space: 0B 2025-09-09T14:05:53.0174037Z ##[group]Run ./test-infra/.github/actions/setup-ssh 2025-09-09T14:05:53.0174423Z with: 2025-09-09T14:05:53.0175070Z github-secret: *** 2025-09-09T14:05:53.0175775Z instructions: All testing is done inside the container, to start an interactive session run: docker exec -it $(docker container ps --format '{{.ID}}') bash 2025-09-09T14:05:53.0176554Z activate-with-label: false 2025-09-09T14:05:53.0176821Z label: with-ssh 2025-09-09T14:05:53.0177064Z remove-existing-keys: true 2025-09-09T14:05:53.0177340Z fail-silently: true 2025-09-09T14:05:53.0177561Z env: 2025-09-09T14:05:53.0177801Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:53.0178300Z REPOSITORY: pytorch/ao 2025-09-09T14:05:53.0178555Z PR_NUMBER: 2963 2025-09-09T14:05:53.0180084Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:53.0181771Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:53.0182376Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:53.0182939Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:53.0183322Z ##[endgroup] 2025-09-09T14:05:53.1267349Z Please see https://github.com/pytorch/pytorch/wiki/Debugging-using-with-ssh-for-Github-Actions for more info. 
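Two details of the setup-linux steps above are easy to misread. The usage error from docker stop ("requires at least 1 argument") is expected: with no containers running, the $(docker ps -q) expansion is empty, and the step deliberately ignores the failure with || true before pruning images. The ECR login is wrapped in a small retry helper with a short backoff. A minimal sketch of both, using the same commands shown in the step dumps (AWS_DEFAULT_REGION is us-east-1 in this job):

  # Stop any leftover containers; the expansion is empty on a clean runner, so the
  # resulting "requires at least 1 argument" error is swallowed on purpose.
  docker stop $(docker ps -q) || true
  docker system prune -af

  # Retry helper: run the command, then retry after 1s and again after 2s on failure.
  retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@"); }
  AWS_ACCOUNT_ID=$(aws sts get-caller-identity | grep Account | cut -f4 -d\")
  retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | \
    docker login --username AWS --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"

Note that retry only wraps the aws ecr get-login-password half of the pipeline; the docker login itself is not retried.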
2025-09-09T14:05:53.6674089Z Grabbing public ssh keys from https://github.com/andrewor14.keys 2025-09-09T14:05:53.7486959Z ~/.ssh/authorized_keys file found on node, removing ~/.ssh and starting fresh 2025-09-09T14:05:53.7501138Z Public keys pulled and installed to /home/ec2-user/.ssh/authorized_keys 2025-09-09T14:05:53.7544867Z Login using: ssh ec2-user@ec2-3-93-16-8.compute-1.amazonaws.com 2025-09-09T14:05:53.7545917Z All testing is done inside the container, to start an interactive session run: 2025-09-09T14:05:53.7546958Z docker exec -it $(docker container ps --format '{{.ID}}') bash 2025-09-09T14:05:53.7671301Z ##[group]Run actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 2025-09-09T14:05:53.7671775Z with: 2025-09-09T14:05:53.7671995Z repository: pytorch/ao 2025-09-09T14:05:53.7672249Z ref: refs/pull/2963/merge 2025-09-09T14:05:53.7672512Z path: pytorch/ao 2025-09-09T14:05:53.7672733Z fetch-depth: 1 2025-09-09T14:05:53.7672964Z submodules: recursive 2025-09-09T14:05:53.7673359Z token: *** 2025-09-09T14:05:53.7673596Z ssh-strict: true 2025-09-09T14:05:53.7673811Z ssh-user: git 2025-09-09T14:05:53.7674053Z persist-credentials: true 2025-09-09T14:05:53.7674324Z clean: true 2025-09-09T14:05:53.7674553Z sparse-checkout-cone-mode: true 2025-09-09T14:05:53.7674950Z fetch-tags: false 2025-09-09T14:05:53.7675178Z show-progress: true 2025-09-09T14:05:53.7675419Z lfs: false 2025-09-09T14:05:53.7675632Z set-safe-directory: true 2025-09-09T14:05:53.7675889Z env: 2025-09-09T14:05:53.7676119Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:53.7676504Z REPOSITORY: pytorch/ao 2025-09-09T14:05:53.7676748Z PR_NUMBER: 2963 2025-09-09T14:05:53.7678253Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:53.7679954Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:53.7680539Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:53.7681100Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:53.7681479Z ##[endgroup] 2025-09-09T14:05:53.8647567Z Syncing repository: pytorch/ao 2025-09-09T14:05:53.8654876Z ##[group]Getting Git version info 2025-09-09T14:05:53.8655378Z Working directory is '/home/ec2-user/actions-runner/_work/ao/ao/pytorch/ao' 2025-09-09T14:05:53.8681058Z [command]/usr/bin/git version 2025-09-09T14:05:53.8715485Z git version 2.47.1 2025-09-09T14:05:53.8739451Z ##[endgroup] 2025-09-09T14:05:53.8758828Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/fbc93d9a-a9a0-40ed-8cdf-d6f28f42ec88' before making global git config changes 2025-09-09T14:05:53.8759793Z Adding repository directory to the temporary git global config as a safe directory 2025-09-09T14:05:53.8763661Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/ao/ao/pytorch/ao 2025-09-09T14:05:53.8790196Z ##[group]Initializing the repository 2025-09-09T14:05:53.8794954Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/ao/ao/pytorch/ao 2025-09-09T14:05:53.8825949Z hint: Using 'master' as the name for the initial branch. This default branch name 2025-09-09T14:05:53.8826595Z hint: is subject to change. 
To configure the initial branch name to use in all 2025-09-09T14:05:53.8827201Z hint: of your new repositories, which will suppress this warning, call: 2025-09-09T14:05:53.8827612Z hint: 2025-09-09T14:05:53.8827898Z hint: git config --global init.defaultBranch 2025-09-09T14:05:53.8828236Z hint: 2025-09-09T14:05:53.8828568Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and 2025-09-09T14:05:53.8829133Z hint: 'development'. The just-created branch can be renamed via this command: 2025-09-09T14:05:53.8829585Z hint: 2025-09-09T14:05:53.8829788Z hint: git branch -m 2025-09-09T14:05:53.8830297Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/ao/ao/pytorch/ao/.git/ 2025-09-09T14:05:53.8835758Z [command]/usr/bin/git remote add origin https://github.com/pytorch/ao 2025-09-09T14:05:53.8860260Z ##[endgroup] 2025-09-09T14:05:53.8861031Z ##[group]Disabling automatic garbage collection 2025-09-09T14:05:53.8864192Z [command]/usr/bin/git config --local gc.auto 0 2025-09-09T14:05:53.8890141Z ##[endgroup] 2025-09-09T14:05:53.8891134Z ##[group]Setting up auth 2025-09-09T14:05:53.8895801Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2025-09-09T14:05:53.8920571Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :" 2025-09-09T14:05:53.9179682Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-09-09T14:05:53.9205588Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-09-09T14:05:53.9465649Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-09-09T14:05:53.9516178Z ##[endgroup] 2025-09-09T14:05:53.9516939Z ##[group]Fetching the repository 2025-09-09T14:05:53.9524019Z [command]/usr/bin/git -c protocol.version=2 fetch --no-tags --prune --no-recurse-submodules --depth=1 origin +refs/pull/2963/merge:refs/remotes/pull/2963/merge 2025-09-09T14:05:54.6869464Z From https://github.com/pytorch/ao 2025-09-09T14:05:54.6869892Z * [new ref] refs/pull/2963/merge -> pull/2963/merge 2025-09-09T14:05:54.6889928Z ##[endgroup] 2025-09-09T14:05:54.6890354Z ##[group]Determining the checkout info 2025-09-09T14:05:54.6892067Z ##[endgroup] 2025-09-09T14:05:54.6896453Z [command]/usr/bin/git sparse-checkout disable 2025-09-09T14:05:54.6928321Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-09-09T14:05:54.6950992Z ##[group]Checking out the ref 2025-09-09T14:05:54.6954559Z [command]/usr/bin/git checkout --progress --force refs/remotes/pull/2963/merge 2025-09-09T14:05:54.7998613Z Note: switching to 'refs/remotes/pull/2963/merge'. 2025-09-09T14:05:54.7998921Z 2025-09-09T14:05:54.7999161Z You are in 'detached HEAD' state. You can look around, make experimental 2025-09-09T14:05:54.7999762Z changes and commit them, and you can discard any commits you make in this 2025-09-09T14:05:54.8000325Z state without impacting any branches by switching back to a branch. 2025-09-09T14:05:54.8000659Z 2025-09-09T14:05:54.8000882Z If you want to create a new branch to retain commits you create, you may 2025-09-09T14:05:54.8001383Z do so (now or later) by using -c with the switch command. 
Example: 2025-09-09T14:05:54.8001690Z 2025-09-09T14:05:54.8001801Z git switch -c 2025-09-09T14:05:54.8002248Z 2025-09-09T14:05:54.8002361Z Or undo this operation with: 2025-09-09T14:05:54.8002553Z 2025-09-09T14:05:54.8002645Z git switch - 2025-09-09T14:05:54.8002772Z 2025-09-09T14:05:54.8003024Z Turn off this advice by setting config variable advice.detachedHead to false 2025-09-09T14:05:54.8003375Z 2025-09-09T14:05:54.8003781Z HEAD is now at 7c05f81 Merge c21284c127b039bc49cc7ffda0e692894ed3b094 into 8b72284fd363b5c096de93fb7ac9cc960a6a601e 2025-09-09T14:05:54.8008036Z ##[endgroup] 2025-09-09T14:05:54.8008471Z ##[group]Setting up auth for fetching submodules 2025-09-09T14:05:54.8013937Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-09-09T14:05:54.8054633Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-09-09T14:05:54.8077220Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-09-09T14:05:54.8101966Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-09-09T14:05:54.8123422Z ##[endgroup] 2025-09-09T14:05:54.8123840Z ##[group]Fetching submodules 2025-09-09T14:05:54.8126996Z [command]/usr/bin/git submodule sync --recursive 2025-09-09T14:05:54.8380114Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --depth=1 --recursive 2025-09-09T14:05:54.8628337Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass) registered for path 'third_party/cutlass' 2025-09-09T14:05:54.8649298Z Cloning into '/home/ec2-user/actions-runner/_work/ao/ao/pytorch/ao/third_party/cutlass'... 2025-09-09T14:05:56.5173669Z From https://github.com/NVIDIA/cutlass 2025-09-09T14:05:56.5174531Z * branch e51efbfe18fe4f4cbb66ab814c55bf4aa0185491 -> FETCH_HEAD 2025-09-09T14:05:57.1058739Z Submodule path 'third_party/cutlass': checked out 'e51efbfe18fe4f4cbb66ab814c55bf4aa0185491' 2025-09-09T14:05:57.1095758Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2025-09-09T14:05:57.1345849Z Entering 'third_party/cutlass' 2025-09-09T14:05:57.1402345Z ##[endgroup] 2025-09-09T14:05:57.1403100Z ##[group]Persisting credentials for submodules 2025-09-09T14:05:57.1408799Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2025-09-09T14:05:57.1653986Z Entering 'third_party/cutlass' 2025-09-09T14:05:57.1724444Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2025-09-09T14:05:57.1967224Z Entering 'third_party/cutlass' 2025-09-09T14:05:57.2013815Z file:/home/ec2-user/actions-runner/_work/ao/ao/pytorch/ao/.git/modules/third_party/cutlass/config remote.origin.url 2025-09-09T14:05:57.2064409Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2025-09-09T14:05:57.2311029Z Entering 'third_party/cutlass' 2025-09-09T14:05:57.2370736Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2025-09-09T14:05:57.2615904Z Entering 'third_party/cutlass' 2025-09-09T14:05:57.2670121Z 
##[endgroup] 2025-09-09T14:05:57.2700383Z [command]/usr/bin/git log -1 --format=%H 2025-09-09T14:05:57.2719677Z 7c05f811b89289f7be3e0e3546626827f2cc1ca4 2025-09-09T14:05:57.3024530Z Prepare all required actions 2025-09-09T14:05:57.3025418Z Getting action download info 2025-09-09T14:05:57.4592678Z Download action repository 'nick-fields/retry@v3.0.0' (SHA:7152eba30c6575329ac0576536151aca5a72780e) 2025-09-09T14:05:57.6571921Z ##[group]Run ./test-infra/.github/actions/calculate-docker-image 2025-09-09T14:05:57.6572317Z with: 2025-09-09T14:05:57.6572558Z use-custom-docker-registry: true 2025-09-09T14:05:57.6573089Z docker-image-name: pytorch/almalinux-builder:cpu 2025-09-09T14:05:57.6573467Z docker-build-dir: .ci/docker 2025-09-09T14:05:57.6573760Z working-directory: pytorch/ao 2025-09-09T14:05:57.6574046Z docker-build-script: ./build.sh 2025-09-09T14:05:57.6574440Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-09T14:05:57.6574828Z force-push: false 2025-09-09T14:05:57.6575055Z env: 2025-09-09T14:05:57.6575287Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:57.6575624Z REPOSITORY: pytorch/ao 2025-09-09T14:05:57.6575916Z PR_NUMBER: 2963 2025-09-09T14:05:57.6577403Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:57.6579098Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:57.6579704Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:57.6580250Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:57.6580644Z ##[endgroup] 2025-09-09T14:05:57.6611081Z ##[group]Run set -ex 2025-09-09T14:05:57.6611412Z set -ex 2025-09-09T14:05:57.6611621Z  2025-09-09T14:05:57.6612033Z # If the docker build directory or the build script doesn't exist, the action will 2025-09-09T14:05:57.6612777Z # gracefully return the docker image name as it is. Pulling docker image in Linux 2025-09-09T14:05:57.6613351Z # job could then download the pre-built image as usual 2025-09-09T14:05:57.6614051Z if [[ -d "${DOCKER_BUILD_DIR}" ]] && [[ -f "${DOCKER_BUILD_DIR}/${DOCKER_BUILD_SCRIPT}" ]] && [[ "${USE_CUSTOM_DOCKER_REGISTRY}" == "true" ]]; then 2025-09-09T14:05:57.6614702Z  echo "skip=false" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6615031Z else 2025-09-09T14:05:57.6615276Z  echo "skip=true" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6615722Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6616131Z  2025-09-09T14:05:57.6616681Z  echo "Not using custom ECR registry. Either it was not requested or there is no Docker build script in the ${REPO_NAME} repo..." 
2025-09-09T14:05:57.6617328Z  exit 0 2025-09-09T14:05:57.6617536Z fi 2025-09-09T14:05:57.6617741Z  2025-09-09T14:05:57.6618065Z if [[ "${DOCKER_IMAGE_NAME}" == *"${DOCKER_REGISTRY}/${REPO_NAME}"* ]]; then 2025-09-09T14:05:57.6618673Z  # The docker image name already includes the ECR prefix and tag, so we can just 2025-09-09T14:05:57.6619214Z  # use it as it is, but first let's extract the tag 2025-09-09T14:05:57.6619689Z  DOCKER_TAG=$(echo "${DOCKER_IMAGE_NAME}" | awk -F '[:,]' '{print $2}') 2025-09-09T14:05:57.6620206Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6620693Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6621106Z else 2025-09-09T14:05:57.6621358Z  if [[ "${DOCKER_IMAGE_NAME}" == *:* ]]; then 2025-09-09T14:05:57.6621748Z  CUSTOM_TAG_PREFIX=${DOCKER_IMAGE_NAME#*:} 2025-09-09T14:05:57.6622143Z  DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME%%:*} 2025-09-09T14:05:57.6622644Z  fi 2025-09-09T14:05:57.6623112Z  DOCKER_TAG=${CUSTOM_TAG_PREFIX:+${CUSTOM_TAG_PREFIX}-}$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}") 2025-09-09T14:05:57.6623725Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6624375Z  echo "docker-image=${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6625188Z  echo "custom-tag-prefix=${CUSTOM_TAG_PREFIX}" >> "${GITHUB_OUTPUT}" 2025-09-09T14:05:57.6625610Z fi 2025-09-09T14:05:57.6631328Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:57.6631693Z env: 2025-09-09T14:05:57.6631944Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:57.6632274Z REPOSITORY: pytorch/ao 2025-09-09T14:05:57.6632531Z PR_NUMBER: 2963 2025-09-09T14:05:57.6634025Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:57.6635819Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:57.6636420Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:57.6636980Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:57.6637381Z REPO_NAME: ao 2025-09-09T14:05:57.6637664Z DOCKER_IMAGE_NAME: pytorch/almalinux-builder:cpu 2025-09-09T14:05:57.6638012Z DOCKER_BUILD_DIR: .ci/docker 2025-09-09T14:05:57.6638299Z DOCKER_BUILD_SCRIPT: ./build.sh 2025-09-09T14:05:57.6638667Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-09T14:05:57.6639071Z USE_CUSTOM_DOCKER_REGISTRY: true 2025-09-09T14:05:57.6639347Z CUSTOM_TAG_PREFIX: 2025-09-09T14:05:57.6639588Z ##[endgroup] 2025-09-09T14:05:57.6667214Z + [[ -d .ci/docker ]] 2025-09-09T14:05:57.6667478Z + echo skip=true 2025-09-09T14:05:57.6667799Z + echo docker-image=pytorch/almalinux-builder:cpu 2025-09-09T14:05:57.6668517Z + echo 'Not using custom ECR registry. Either it was not requested or there is no Docker build script in the ao repo...' 2025-09-09T14:05:57.6669121Z + exit 0 2025-09-09T14:05:57.6669594Z Not using custom ECR registry. Either it was not requested or there is no Docker build script in the ao repo... 
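The calculate-docker-image step above took the skip=true branch because pytorch/ao has no .ci/docker/build.sh, so the job pulls pytorch/almalinux-builder:cpu from Docker Hub instead of a repo-specific ECR image. For repos that do ship a Docker build directory, the else branch of that script derives the tag from the git tree hash of the build directory, so the tag (and therefore the cached image) only changes when the Docker build files themselves change. A sketch of that derivation with this job's values, for illustration only:

  DOCKER_BUILD_DIR=.ci/docker
  DOCKER_IMAGE_NAME=pytorch/almalinux-builder:cpu
  DOCKER_REGISTRY=308535385114.dkr.ecr.us-east-1.amazonaws.com
  REPO_NAME=ao

  CUSTOM_TAG_PREFIX=${DOCKER_IMAGE_NAME#*:}     # -> cpu
  DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME%%:*}    # -> pytorch/almalinux-builder
  # git rev-parse HEAD:<path> prints the tree hash of that directory at HEAD
  # (this would fail in pytorch/ao itself, which has no .ci/docker).
  DOCKER_TAG=${CUSTOM_TAG_PREFIX:+${CUSTOM_TAG_PREFIX}-}$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}")
  echo "${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}"
  # -> 308535385114.dkr.ecr.us-east-1.amazonaws.com/ao/pytorch/almalinux-builder:cpu-<tree-hash>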
2025-09-09T14:05:57.6713475Z ##[group]Run set -eux 2025-09-09T14:05:57.6713780Z set -eux 2025-09-09T14:05:57.6714186Z # It's ok if this steps fails, it would then be an anonymous user like what we used to have 2025-09-09T14:05:57.6715457Z aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token | jq --raw-output '.SecretString' | jq -r .docker_hub_readonly_token | docker login --username pytorchbot --password-stdin || true 2025-09-09T14:05:57.6721807Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:57.6722172Z env: 2025-09-09T14:05:57.6722423Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:57.6722754Z REPOSITORY: pytorch/ao 2025-09-09T14:05:57.6723012Z PR_NUMBER: 2963 2025-09-09T14:05:57.6724491Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:57.6726202Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:57.6726802Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:57.6727364Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:57.6727744Z ##[endgroup] 2025-09-09T14:05:57.6754159Z + aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token 2025-09-09T14:05:57.6755027Z + jq --raw-output .SecretString 2025-09-09T14:05:57.6755906Z + jq -r .docker_hub_readonly_token 2025-09-09T14:05:57.6757172Z + docker login --username pytorchbot --password-stdin 2025-09-09T14:05:58.2874740Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-09-09T14:05:58.2875748Z Configure a credential helper to remove this warning. See 2025-09-09T14:05:58.2876338Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-09-09T14:05:58.2876731Z 2025-09-09T14:05:58.2876921Z Login Succeeded 2025-09-09T14:05:58.2954897Z Prepare all required actions 2025-09-09T14:05:58.2995709Z ##[group]Run ./test-infra/.github/actions/pull-docker-image 2025-09-09T14:05:58.2996082Z with: 2025-09-09T14:05:58.2996337Z docker-image: pytorch/almalinux-builder:cpu 2025-09-09T14:05:58.2996764Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-09T14:05:58.2997157Z env: 2025-09-09T14:05:58.2997396Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:58.2997724Z REPOSITORY: pytorch/ao 2025-09-09T14:05:58.2997985Z PR_NUMBER: 2963 2025-09-09T14:05:58.2999474Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:58.3001229Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:58.3001816Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:58.3002381Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:58.3002777Z ##[endgroup] 2025-09-09T14:05:58.3028347Z ##[group]Run set -x 2025-09-09T14:05:58.3028629Z set -x 2025-09-09T14:05:58.3028855Z set +e 2025-09-09T14:05:58.3029062Z  2025-09-09T14:05:58.3029281Z login() { 2025-09-09T14:05:58.3029750Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-09-09T14:05:58.3030279Z } 2025-09-09T14:05:58.3030471Z  2025-09-09T14:05:58.3030677Z retry () { 2025-09-09T14:05:58.3030946Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-09-09T14:05:58.3031258Z } 2025-09-09T14:05:58.3031479Z  2025-09-09T14:05:58.3031699Z retry login "${DOCKER_REGISTRY}" 2025-09-09T14:05:58.3032024Z  2025-09-09T14:05:58.3032506Z IMAGE_SIZE=$(docker manifest inspect "${DOCKER_IMAGE}" | jq '[.layers[].size, .config.size] | add / 1024 / 1024') 2025-09-09T14:05:58.3033187Z echo "Compressed size of image in MB: ${IMAGE_SIZE}" 2025-09-09T14:05:58.3033550Z  2025-09-09T14:05:58.3033761Z set -e 2025-09-09T14:05:58.3034106Z # ignore output since only exit code is used for conditional 2025-09-09T14:05:58.3034594Z # only pull docker image if it's not available locally 2025-09-09T14:05:58.3035253Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2025-09-09T14:05:58.3035752Z  retry docker pull "${DOCKER_IMAGE}" 2025-09-09T14:05:58.3036084Z fi 2025-09-09T14:05:58.3042505Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:05:58.3042887Z env: 2025-09-09T14:05:58.3043125Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:05:58.3043492Z REPOSITORY: pytorch/ao 2025-09-09T14:05:58.3043736Z PR_NUMBER: 2963 2025-09-09T14:05:58.3045237Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:05:58.3046934Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:05:58.3047522Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:05:58.3048078Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:05:58.3048712Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-09T14:05:58.3049091Z ##[endgroup] 2025-09-09T14:05:58.3076191Z + set +e 2025-09-09T14:05:58.3077017Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-09T14:05:58.3077804Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-09T14:05:58.3079568Z + aws ecr get-login-password --region us-east-1 2025-09-09T14:05:58.3080875Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-09T14:05:58.8540001Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-09-09T14:05:58.8540618Z Configure a credential helper to remove this warning. 
See 2025-09-09T14:05:58.8541271Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-09-09T14:05:58.8541758Z 2025-09-09T14:05:58.8541865Z Login Succeeded 2025-09-09T14:05:58.8558558Z ++ docker manifest inspect pytorch/almalinux-builder:cpu 2025-09-09T14:05:58.8559522Z ++ jq '[.layers[].size, .config.size] | add / 1024 / 1024' 2025-09-09T14:05:59.0186302Z + IMAGE_SIZE=1439.2328958511353 2025-09-09T14:05:59.0186727Z + echo 'Compressed size of image in MB: 1439.2328958511353' 2025-09-09T14:05:59.0187140Z + set -e 2025-09-09T14:05:59.0187436Z + docker inspect --type=image pytorch/almalinux-builder:cpu 2025-09-09T14:05:59.0187874Z Compressed size of image in MB: 1439.2328958511353 2025-09-09T14:05:59.0293654Z + retry docker pull pytorch/almalinux-builder:cpu 2025-09-09T14:05:59.0294066Z + docker pull pytorch/almalinux-builder:cpu 2025-09-09T14:05:59.1827789Z cpu: Pulling from pytorch/almalinux-builder 2025-09-09T14:05:59.1828416Z 19877a9af8e3: Pulling fs layer 2025-09-09T14:05:59.1828851Z fe05152297d3: Pulling fs layer 2025-09-09T14:05:59.1829145Z 9c5a63e97f59: Pulling fs layer 2025-09-09T14:05:59.1829539Z 918715f58173: Pulling fs layer 2025-09-09T14:05:59.1829816Z 692d6799dd80: Pulling fs layer 2025-09-09T14:05:59.1830191Z c6352f35dfa2: Pulling fs layer 2025-09-09T14:05:59.1830659Z 518054e53c81: Pulling fs layer 2025-09-09T14:05:59.1830963Z 4f4fb700ef54: Pulling fs layer 2025-09-09T14:05:59.1831295Z 3b571ac2ab3b: Pulling fs layer 2025-09-09T14:05:59.1831585Z 84008f185523: Pulling fs layer 2025-09-09T14:05:59.1831890Z 9ee5aeef32d7: Pulling fs layer 2025-09-09T14:05:59.1832176Z c6352f35dfa2: Waiting 2025-09-09T14:05:59.1832414Z a80ec369bee3: Pulling fs layer 2025-09-09T14:05:59.1832687Z f1417b667e9d: Pulling fs layer 2025-09-09T14:05:59.1832935Z 518054e53c81: Waiting 2025-09-09T14:05:59.1833178Z 0c3cc5825672: Pulling fs layer 2025-09-09T14:05:59.1833438Z 895a870a9edd: Pulling fs layer 2025-09-09T14:05:59.1833702Z 692d6799dd80: Waiting 2025-09-09T14:05:59.1833946Z b7eb993f501a: Pulling fs layer 2025-09-09T14:05:59.1834201Z 4f4fb700ef54: Waiting 2025-09-09T14:05:59.1834440Z 9ee5aeef32d7: Waiting 2025-09-09T14:05:59.1834662Z 895a870a9edd: Waiting 2025-09-09T14:05:59.1835009Z 4d4d94988ad5: Pulling fs layer 2025-09-09T14:05:59.1835263Z f1417b667e9d: Waiting 2025-09-09T14:05:59.1835505Z b7eb993f501a: Waiting 2025-09-09T14:05:59.1835720Z 4d4d94988ad5: Waiting 2025-09-09T14:05:59.1835946Z 918715f58173: Waiting 2025-09-09T14:05:59.1836161Z a80ec369bee3: Waiting 2025-09-09T14:05:59.3608490Z 9c5a63e97f59: Verifying Checksum 2025-09-09T14:05:59.3608860Z 9c5a63e97f59: Download complete 2025-09-09T14:05:59.7364132Z 918715f58173: Verifying Checksum 2025-09-09T14:05:59.7364516Z 918715f58173: Download complete 2025-09-09T14:05:59.9278641Z 19877a9af8e3: Download complete 2025-09-09T14:05:59.9586689Z c6352f35dfa2: Verifying Checksum 2025-09-09T14:05:59.9587075Z c6352f35dfa2: Download complete 2025-09-09T14:06:00.4418237Z 518054e53c81: Verifying Checksum 2025-09-09T14:06:00.4418612Z 518054e53c81: Download complete 2025-09-09T14:06:00.4670787Z fe05152297d3: Verifying Checksum 2025-09-09T14:06:00.4671131Z fe05152297d3: Download complete 2025-09-09T14:06:00.4887402Z 4f4fb700ef54: Verifying Checksum 2025-09-09T14:06:00.4887950Z 4f4fb700ef54: Download complete 2025-09-09T14:06:00.5684097Z 84008f185523: Download complete 2025-09-09T14:06:00.6146759Z 3b571ac2ab3b: Download complete 2025-09-09T14:06:00.6846756Z a80ec369bee3: Verifying Checksum 2025-09-09T14:06:00.6847203Z a80ec369bee3: 
Download complete 2025-09-09T14:06:00.7234996Z f1417b667e9d: Download complete 2025-09-09T14:06:00.7710440Z 0c3cc5825672: Download complete 2025-09-09T14:06:00.9203798Z 895a870a9edd: Verifying Checksum 2025-09-09T14:06:00.9204184Z 895a870a9edd: Download complete 2025-09-09T14:06:00.9609284Z b7eb993f501a: Verifying Checksum 2025-09-09T14:06:00.9610280Z b7eb993f501a: Download complete 2025-09-09T14:06:01.6238082Z 692d6799dd80: Verifying Checksum 2025-09-09T14:06:01.6238437Z 692d6799dd80: Download complete 2025-09-09T14:06:01.9783107Z 19877a9af8e3: Pull complete 2025-09-09T14:06:04.0138956Z fe05152297d3: Pull complete 2025-09-09T14:06:04.1849584Z 9c5a63e97f59: Pull complete 2025-09-09T14:06:04.5082428Z 918715f58173: Pull complete 2025-09-09T14:06:04.9298876Z 9ee5aeef32d7: Verifying Checksum 2025-09-09T14:06:04.9299474Z 9ee5aeef32d7: Download complete 2025-09-09T14:06:06.9050074Z 4d4d94988ad5: Verifying Checksum 2025-09-09T14:06:06.9050429Z 4d4d94988ad5: Download complete 2025-09-09T14:06:09.2923109Z 692d6799dd80: Pull complete 2025-09-09T14:06:09.3911866Z c6352f35dfa2: Pull complete 2025-09-09T14:06:10.3658827Z 518054e53c81: Pull complete 2025-09-09T14:06:10.4660068Z 4f4fb700ef54: Pull complete 2025-09-09T14:06:10.6748262Z 3b571ac2ab3b: Pull complete 2025-09-09T14:06:10.8056926Z 84008f185523: Pull complete 2025-09-09T14:06:22.2686397Z 9ee5aeef32d7: Pull complete 2025-09-09T14:06:22.7241241Z a80ec369bee3: Pull complete 2025-09-09T14:06:23.1089284Z f1417b667e9d: Pull complete 2025-09-09T14:06:23.6089251Z 0c3cc5825672: Pull complete 2025-09-09T14:06:24.3142223Z 895a870a9edd: Pull complete 2025-09-09T14:06:24.7987757Z b7eb993f501a: Pull complete 2025-09-09T14:06:38.3502403Z 4d4d94988ad5: Pull complete 2025-09-09T14:06:38.3619424Z Digest: sha256:10f309602e8cd84e21cb6970f97544761dd12a06b141583ab4d45f0bac4bf651 2025-09-09T14:06:38.3661799Z Status: Downloaded newer image for pytorch/almalinux-builder:cpu 2025-09-09T14:06:38.3690956Z docker.io/pytorch/almalinux-builder:cpu 2025-09-09T14:06:38.3747346Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-09T14:06:38.3748351Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-09T14:06:38.3759624Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:06:38.3760016Z env: 2025-09-09T14:06:38.3760257Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:06:38.3760604Z REPOSITORY: pytorch/ao 2025-09-09T14:06:38.3760856Z PR_NUMBER: 2963 2025-09-09T14:06:38.3762431Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:06:38.3764137Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:06:38.3764737Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:06:38.3765304Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:06:38.3765687Z ##[endgroup] 2025-09-09T14:06:38.3914844Z ##[group]Run set -ex 2025-09-09T14:06:38.3915169Z set -ex 2025-09-09T14:06:38.3915380Z { 2025-09-09T14:06:38.3915611Z  echo "#!/usr/bin/env bash"; 2025-09-09T14:06:38.3915936Z  echo "set -eou pipefail"; 2025-09-09T14:06:38.3916245Z  # shellcheck disable=SC2016 2025-09-09T14:06:38.3916590Z  echo 'eval "$(conda shell.bash hook)"'; 2025-09-09T14:06:38.3916921Z  echo "set -x"; 2025-09-09T14:06:38.3917189Z  echo "${SCRIPT}"; 2025-09-09T14:06:38.3917698Z } > "${RUNNER_TEMP}/exec_script" 2025-09-09T14:06:38.3918041Z chmod +x "${RUNNER_TEMP}/exec_script" 2025-09-09T14:06:38.3918696Z python3 "/home/ec2-user/actions-runner/_work/ao/ao/test-infra/.github/scripts/run_with_env_secrets.py" "" 2025-09-09T14:06:38.3925298Z shell: /usr/bin/bash -e {0} 2025-09-09T14:06:38.3925556Z env: 2025-09-09T14:06:38.3925817Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:06:38.3926199Z REPOSITORY: pytorch/ao 2025-09-09T14:06:38.3926449Z PR_NUMBER: 2963 2025-09-09T14:06:38.3927959Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:06:38.3929658Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:06:38.3930266Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:06:38.3930828Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:06:38.3931470Z ALL_SECRETS: { "github_token": "***" } 2025-09-09T14:06:38.3931775Z ##[endgroup] 2025-09-09T14:06:38.3959432Z + echo '#!/usr/bin/env bash' 2025-09-09T14:06:38.3959753Z + echo 'set -eou pipefail' 2025-09-09T14:06:38.3960069Z + echo 'eval "$(conda shell.bash hook)"' 2025-09-09T14:06:38.3960375Z + echo 'set -x' 2025-09-09T14:06:38.3960636Z + echo 'conda create -n venv python=3.9 -y 2025-09-09T14:06:38.3960960Z conda activate venv 2025-09-09T14:06:38.3961211Z python -m pip install --upgrade pip 2025-09-09T14:06:38.3961707Z pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu 2025-09-09T14:06:38.3962200Z pip install -r dev-requirements.txt 2025-09-09T14:06:38.3962504Z pip install . 
2025-09-09T14:06:38.3962783Z export CONDA=$(dirname $(dirname $(which conda))) 2025-09-09T14:06:38.3963184Z export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH 2025-09-09T14:06:38.3963534Z pytest test --verbose -s 2025-09-09T14:06:38.3963787Z ' 2025-09-09T14:06:38.3964073Z + chmod +x /home/ec2-user/actions-runner/_work/_temp/exec_script 2025-09-09T14:06:38.3973543Z + python3 /home/ec2-user/actions-runner/_work/ao/ao/test-infra/.github/scripts/run_with_env_secrets.py '' 2025-09-09T14:06:56.3554537Z Running command: 2025-09-09T14:06:56.3561467Z docker run -e PR_NUMBER -e RUNNER_ARTIFACT_DIR=/artifacts -e RUNNER_DOCS_DIR=/docs -e RUNNER_TEST_RESULTS_DIR=/test-results --env-file="/home/ec2-user/actions-runner/_work/_temp/github_env_17585175130" `# It is unknown why the container sees a different value for this.` -e GITHUB_STEP_SUMMARY -e SECRET_GITHUB_TOKEN --cap-add=SYS_PTRACE --detach --ipc=host --security-opt seccomp=unconfined --shm-size=2g --tty --ulimit stack=10485760:83886080 --ulimit core=0 -v "/home/ec2-user/actions-runner/_work/ao/ao/pytorch/ao:/pytorch/ao" -v "/home/ec2-user/actions-runner/_work/ao/ao/test-infra:/test-infra" -v "/home/ec2-user/actions-runner/_work/_temp/artifacts:/artifacts" -v "/home/ec2-user/actions-runner/_work/_temp/docs:/docs" -v "/home/ec2-user/actions-runner/_work/_temp/test-results:/test-results" -v "/home/ec2-user/actions-runner/_work/_temp/exec_script:/exec" -v "/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_6693ad63-d2dd-463a-befa-0162a2078c2e":"/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_6693ad63-d2dd-463a-befa-0162a2078c2e" -w /pytorch/ao "pytorch/almalinux-builder:cpu" 2025-09-09T14:06:56.3566626Z 2025-09-09T14:06:56.3567189Z f3a755ba68cb7be3ae6465e0287a3d53dc5126ae70ee1cbee8ed8517704cf634 2025-09-09T14:06:56.3567893Z Running command: docker exec -t f3a755ba68cb7be3ae6465e0287a3d53dc5126ae70ee1cbee8ed8517704cf634 /exec 2025-09-09T14:06:56.3568505Z + conda create -n venv python=3.9 -y 2025-09-09T14:06:56.3568799Z + local cmd=create 2025-09-09T14:06:56.3569035Z + case "$cmd" in 2025-09-09T14:06:56.3569282Z + __conda_exe create -n venv python=3.9 -y 2025-09-09T14:06:56.3569663Z + /opt/conda/bin/conda create -n venv python=3.9 -y 2025-09-09T14:06:56.3570597Z Collecting package metadata (current_repodata.json): - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / done 2025-09-09T14:06:56.3571219Z Solving environment: \ done 2025-09-09T14:06:56.3571418Z 2025-09-09T14:06:56.3571423Z 2025-09-09T14:06:56.3571555Z ==> WARNING: A newer version of conda exists. 
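As an aside, the long docker run one-liner above is easier to follow broken out. A condensed sketch of the same invocation follows (the absolute runner paths are written here as $GITHUB_WORKSPACE and $RUNNER_TEMP, and the --env-file and step-summary mount are omitted for brevity; everything else mirrors the command in the log):
  # sketch of the container invocation used by the job; the wrapper script is mounted at /exec
  CID=$(docker run --detach --tty --ipc=host --shm-size=2g \
      --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
      --ulimit stack=10485760:83886080 --ulimit core=0 \
      -e PR_NUMBER -e GITHUB_STEP_SUMMARY -e SECRET_GITHUB_TOKEN \
      -e RUNNER_ARTIFACT_DIR=/artifacts -e RUNNER_TEST_RESULTS_DIR=/test-results -e RUNNER_DOCS_DIR=/docs \
      -v "$GITHUB_WORKSPACE/pytorch/ao:/pytorch/ao" \
      -v "$GITHUB_WORKSPACE/test-infra:/test-infra" \
      -v "$RUNNER_TEMP/artifacts:/artifacts" \
      -v "$RUNNER_TEMP/docs:/docs" \
      -v "$RUNNER_TEMP/test-results:/test-results" \
      -v "$RUNNER_TEMP/exec_script:/exec" \
      -w /pytorch/ao \
      pytorch/almalinux-builder:cpu)
  docker exec -t "$CID" /exec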
<== 2025-09-09T14:06:56.3571904Z current version: 23.5.2 2025-09-09T14:06:56.3572171Z latest version: 25.7.0 2025-09-09T14:06:56.3572338Z 2025-09-09T14:06:56.3572458Z Please update conda by running 2025-09-09T14:06:56.3572641Z 2025-09-09T14:06:56.3572759Z $ conda update -n base -c defaults conda 2025-09-09T14:06:56.3572983Z 2025-09-09T14:06:56.3573204Z Or to minimize the number of packages updated during conda update use 2025-09-09T14:06:56.3573528Z 2025-09-09T14:06:56.3573631Z conda install conda=25.7.0 2025-09-09T14:06:56.3573827Z 2025-09-09T14:06:56.3573831Z 2025-09-09T14:06:56.3573839Z 2025-09-09T14:06:56.3573932Z ## Package Plan ## 2025-09-09T14:06:56.3574072Z 2025-09-09T14:06:56.3574210Z environment location: /opt/conda/envs/venv 2025-09-09T14:06:56.3574441Z 2025-09-09T14:06:56.3574538Z added / updated specs: 2025-09-09T14:06:56.3574800Z - python=3.9 2025-09-09T14:06:56.3574938Z 2025-09-09T14:06:56.3574942Z 2025-09-09T14:06:56.3575066Z The following packages will be downloaded: 2025-09-09T14:06:56.3575308Z 2025-09-09T14:06:56.3575424Z package | build 2025-09-09T14:06:56.3575762Z ---------------------------|----------------- 2025-09-09T14:06:56.3576139Z bzip2-1.0.8 | h5eee18b_6 262 KB 2025-09-09T14:06:56.3576572Z ld_impl_linux-64-2.40 | h12ee557_0 710 KB 2025-09-09T14:06:56.3576984Z libffi-3.4.4 | h6a678d5_1 141 KB 2025-09-09T14:06:56.3577394Z libxcb-1.17.0 | h9b100fa_0 430 KB 2025-09-09T14:06:56.3577791Z ncurses-6.5 | h7934f7d_0 1.1 MB 2025-09-09T14:06:56.3578185Z pip-25.2 | pyhc872135_0 1.2 MB 2025-09-09T14:06:56.3578591Z pthread-stubs-0.3 | h0ce48e5_1 5 KB 2025-09-09T14:06:56.3579013Z python-3.9.23 | he99959a_0 24.7 MB 2025-09-09T14:06:56.3579427Z readline-8.3 | hc2a1206_0 471 KB 2025-09-09T14:06:56.3579845Z setuptools-78.1.1 | py39h06a4308_0 1.7 MB 2025-09-09T14:06:56.3580276Z sqlite-3.50.2 | hb25bd0a_1 1.1 MB 2025-09-09T14:06:56.3580655Z tk-8.6.15 | h54e0aa7_0 3.4 MB 2025-09-09T14:06:56.3581042Z tzdata-2025b | h04d1e81_0 116 KB 2025-09-09T14:06:56.3581437Z wheel-0.45.1 | py39h06a4308_0 114 KB 2025-09-09T14:06:56.3581853Z xorg-libx11-1.8.12 | h9b100fa_1 895 KB 2025-09-09T14:06:56.3582399Z xorg-libxau-1.0.12 | h9b100fa_0 13 KB 2025-09-09T14:06:56.3582833Z xorg-libxdmcp-1.1.5 | h9b100fa_0 19 KB 2025-09-09T14:06:56.3583290Z xorg-xorgproto-2024.1 | h5eee18b_1 580 KB 2025-09-09T14:06:56.3583691Z xz-5.6.4 | h5eee18b_1 567 KB 2025-09-09T14:06:56.3584078Z zlib-1.2.13 | h5eee18b_1 111 KB 2025-09-09T14:06:56.3584452Z ------------------------------------------------------------ 2025-09-09T14:06:56.3584903Z Total: 37.6 MB 2025-09-09T14:06:56.3585128Z 2025-09-09T14:06:56.3585274Z The following NEW packages will be INSTALLED: 2025-09-09T14:06:56.3585514Z 2025-09-09T14:06:56.3585729Z _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main 2025-09-09T14:06:56.3586209Z _openmp_mutex pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu 2025-09-09T14:06:56.3586654Z bzip2 pkgs/main/linux-64::bzip2-1.0.8-h5eee18b_6 2025-09-09T14:06:56.3587177Z ca-certificates pkgs/main/linux-64::ca-certificates-2025.7.15-h06a4308_0 2025-09-09T14:06:56.3587705Z expat pkgs/main/linux-64::expat-2.7.1-h6a678d5_0 2025-09-09T14:06:56.3588175Z ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.40-h12ee557_0 2025-09-09T14:06:56.3588677Z libffi pkgs/main/linux-64::libffi-3.4.4-h6a678d5_1 2025-09-09T14:06:56.3589125Z libgcc-ng pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1 2025-09-09T14:06:56.3589599Z libgomp pkgs/main/linux-64::libgomp-11.2.0-h1234567_1 2025-09-09T14:06:56.3590092Z libstdcxx-ng 
pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1
2025-09-09T14:06:56.3590566Z libxcb pkgs/main/linux-64::libxcb-1.17.0-h9b100fa_0
2025-09-09T14:06:56.3591012Z ncurses pkgs/main/linux-64::ncurses-6.5-h7934f7d_0
2025-09-09T14:06:56.3591453Z openssl pkgs/main/linux-64::openssl-3.0.17-h5eee18b_0
2025-09-09T14:06:56.3591888Z pip pkgs/main/noarch::pip-25.2-pyhc872135_0
2025-09-09T14:06:56.3592354Z pthread-stubs pkgs/main/linux-64::pthread-stubs-0.3-h0ce48e5_1
2025-09-09T14:06:56.3592852Z python pkgs/main/linux-64::python-3.9.23-he99959a_0
2025-09-09T14:06:56.3593312Z readline pkgs/main/linux-64::readline-8.3-hc2a1206_0
2025-09-09T14:06:56.3593803Z setuptools pkgs/main/linux-64::setuptools-78.1.1-py39h06a4308_0
2025-09-09T14:06:56.3594303Z sqlite pkgs/main/linux-64::sqlite-3.50.2-hb25bd0a_1
2025-09-09T14:06:56.3594791Z tk pkgs/main/linux-64::tk-8.6.15-h54e0aa7_0
2025-09-09T14:06:56.3595209Z tzdata pkgs/main/noarch::tzdata-2025b-h04d1e81_0
2025-09-09T14:06:56.3595659Z wheel pkgs/main/linux-64::wheel-0.45.1-py39h06a4308_0
2025-09-09T14:06:56.3596135Z xorg-libx11 pkgs/main/linux-64::xorg-libx11-1.8.12-h9b100fa_1
2025-09-09T14:06:56.3596655Z xorg-libxau pkgs/main/linux-64::xorg-libxau-1.0.12-h9b100fa_0
2025-09-09T14:06:56.3597176Z xorg-libxdmcp pkgs/main/linux-64::xorg-libxdmcp-1.1.5-h9b100fa_0
2025-09-09T14:06:56.3597741Z xorg-xorgproto pkgs/main/linux-64::xorg-xorgproto-2024.1-h5eee18b_1
2025-09-09T14:06:56.3598222Z xz pkgs/main/linux-64::xz-5.6.4-h5eee18b_1
2025-09-09T14:06:56.3598607Z zlib pkgs/main/linux-64::zlib-1.2.13-h5eee18b_1
2025-09-09T14:06:56.3599003Z Downloading and Extracting Packages
2025-09-09T14:06:56.3599372Z xorg-libxau-1.0.12 | 13 KB | : 0% 0/1 [00:00
[portion of the log missing here: the remaining conda download/extract output, the conda activate and pip upgrade steps, and the first lines of the torch nightly install did not survive in the captured log]
Collecting typing-extensions>=4.10.0 (from torch)
2025-09-09T14:07:05.8401846Z Downloading https://download.pytorch.org/whl/nightly/typing_extensions-4.14.1-py3-none-any.whl.metadata (3.0 kB)
2025-09-09T14:07:05.8402514Z Collecting sympy>=1.13.3 (from torch)
2025-09-09T14:07:05.8403093Z Downloading https://download.pytorch.org/whl/nightly/sympy-1.14.0-py3-none-any.whl.metadata (12 kB)
2025-09-09T14:07:05.8403704Z Collecting networkx>=2.5.1 (from torch)
2025-09-09T14:07:05.8404399Z Downloading https://download.pytorch.org/whl/nightly/networkx-3.5-py3-none-any.whl.metadata (6.3 kB)
2025-09-09T14:07:05.8405080Z Collecting jinja2 (from torch)
2025-09-09T14:07:05.8405660Z Downloading https://download.pytorch.org/whl/nightly/jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
2025-09-09T14:07:05.8406256Z Collecting fsspec>=0.8.5 (from torch)
2025-09-09T14:07:05.8406859Z Downloading https://download.pytorch.org/whl/nightly/fsspec-2025.7.0-py3-none-any.whl.metadata (12 kB)
2025-09-09T14:07:05.8407869Z INFO: pip is looking at multiple versions of networkx to determine which version is compatible with other requirements. This could take a while.
2025-09-09T14:07:05.8408601Z Collecting networkx>=2.5.1 (from torch)
2025-09-09T14:07:05.8409173Z Downloading https://download.pytorch.org/whl/nightly/networkx-3.2.1-py3-none-any.whl (1.6 MB)
2025-09-09T14:07:05.8412205Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/1.6 MB ?
eta -:--:-- 2025-09-09T14:07:05.8412982Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 112.5 MB/s 0:00:00 2025-09-09T14:07:05.8413563Z [?25hCollecting mpmath<1.4,>=1.1.0 (from sympy>=1.13.3->torch) 2025-09-09T14:07:05.8414209Z Downloading https://download.pytorch.org/whl/nightly/mpmath-1.3.0-py3-none-any.whl (536 kB) 2025-09-09T14:07:05.8415007Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/536.2 kB ? eta -:--:-- 2025-09-09T14:07:05.8415798Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 36.1 MB/s 0:00:00 2025-09-09T14:07:05.8416384Z [?25hCollecting MarkupSafe>=2.0 (from jinja2->torch) 2025-09-09T14:07:05.8417215Z Downloading https://download.pytorch.org/whl/nightly/MarkupSafe-3.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB) 2025-09-09T14:07:05.8418430Z Downloading https://download.pytorch.org/whl/nightly/cpu/torch-2.9.0.dev20250825%2Bcpu-cp39-cp39-manylinux_2_28_x86_64.whl (183.5 MB) 2025-09-09T14:07:05.8419398Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/183.5 MB ? eta -:--:-- 2025-09-09T14:07:05.8420149Z  ━━━━━━━━━━━━━━╸━━━━━━━━━━━━━━━━━━━━━━━━━ 67.1/183.5 MB 335.6 MB/s eta 0:00:01 2025-09-09T14:07:05.8420949Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╺━━━━━━━━ 143.1/183.5 MB 356.6 MB/s eta 0:00:01 2025-09-09T14:07:05.8421728Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸ 183.5/183.5 MB 361.0 MB/s eta 0:00:01 2025-09-09T14:07:05.8422479Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸ 183.5/183.5 MB 361.0 MB/s eta 0:00:01 2025-09-09T14:07:13.4439441Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 183.5/183.5 MB 182.7 MB/s 0:00:01 2025-09-09T14:07:13.4440337Z [?25hDownloading https://download.pytorch.org/whl/nightly/fsspec-2025.7.0-py3-none-any.whl (199 kB) 2025-09-09T14:07:13.4441148Z Downloading https://download.pytorch.org/whl/nightly/sympy-1.14.0-py3-none-any.whl (6.3 MB) 2025-09-09T14:07:13.4441945Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/6.3 MB ? 
eta -:--:--
2025-09-09T14:07:13.4442639Z ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.3/6.3 MB 228.9 MB/s 0:00:00
2025-09-09T14:07:13.4443489Z Downloading https://download.pytorch.org/whl/nightly/typing_extensions-4.14.1-py3-none-any.whl (43 kB)
2025-09-09T14:07:13.4444362Z Downloading https://download.pytorch.org/whl/nightly/filelock-3.19.1-py3-none-any.whl (15 kB)
2025-09-09T14:07:13.4445159Z Downloading https://download.pytorch.org/whl/nightly/jinja2-3.1.6-py3-none-any.whl (134 kB)
2025-09-09T14:07:13.4446143Z Downloading https://download.pytorch.org/whl/nightly/MarkupSafe-3.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20 kB)
2025-09-09T14:07:13.4447231Z Installing collected packages: mpmath, typing-extensions, sympy, networkx, MarkupSafe, fsspec, filelock, jinja2, torch
[repeated pip progress-bar frames for this install step trimmed]
2025-09-09T14:07:20.8447991Z Successfully installed MarkupSafe-3.0.2 filelock-3.19.1 fsspec-2025.7.0 jinja2-3.1.6 mpmath-1.3.0 networkx-3.2.1 sympy-1.14.0 torch-2.9.0.dev20250825+cpu typing-extensions-4.14.1
2025-09-09T14:07:25.3590443Z WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
2025-09-09T14:07:25.3592344Z + pip install -r dev-requirements.txt
2025-09-09T14:07:25.3592765Z Collecting pytest (from -r dev-requirements.txt (line 2))
2025-09-09T14:07:25.3593250Z Downloading pytest-8.4.2-py3-none-any.whl.metadata (7.7 kB)
2025-09-09T14:07:25.3593816Z Collecting unittest-xml-reporting (from -r dev-requirements.txt (line 3))
2025-09-09T14:07:25.3594453Z Downloading unittest_xml_reporting-3.2.0-py2.py3-none-any.whl.metadata (11 kB)
2025-09-09T14:07:25.3595135Z Collecting parameterized (from -r dev-requirements.txt (line 4))
2025-09-09T14:07:25.3595716Z Downloading parameterized-0.9.0-py2.py3-none-any.whl.metadata (18 kB)
2025-09-09T14:07:25.3596268Z Collecting packaging (from -r dev-requirements.txt (line 5))
2025-09-09T14:07:25.3596786Z Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
2025-09-09T14:07:25.3597305Z Collecting transformers (from -r dev-requirements.txt (line 6))
2025-09-09T14:07:25.3597859Z Downloading transformers-4.56.1-py3-none-any.whl.metadata (42 kB)
2025-09-09T14:07:25.3598390Z Collecting hypothesis (from -r dev-requirements.txt (line 7))
2025-09-09T14:07:25.3598934Z Downloading hypothesis-6.138.15-py3-none-any.whl.metadata (5.6 kB)
2025-09-09T14:07:25.3599489Z Collecting sentencepiece (from -r dev-requirements.txt (line 8))
2025-09-09T14:07:25.3600189Z Downloading sentencepiece-0.2.1-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (10 kB)
2025-09-09T14:07:25.3600893Z Collecting expecttest (from -r dev-requirements.txt (line 9))
2025-09-09T14:07:25.3601422Z Downloading expecttest-0.3.0-py3-none-any.whl.metadata (3.8 kB)
2025-09-09T14:07:25.3601968Z Collecting bitsandbytes (from -r dev-requirements.txt (line 12))
2025-09-09T14:07:25.3602563Z Downloading bitsandbytes-0.47.0-py3-none-manylinux_2_24_x86_64.whl.metadata (11 kB)
2025-09-09T14:07:25.3603170Z Collecting matplotlib (from -r dev-requirements.txt (line 13))
2025-09-09T14:07:25.3603857Z Downloading matplotlib-3.9.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2025-09-09T14:07:25.3604510Z Collecting pandas (from -r dev-requirements.txt (line 14))
2025-09-09T14:07:25.3605147Z Downloading
pandas-2.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (91 kB) 2025-09-09T14:07:25.3605802Z Collecting fire (from -r dev-requirements.txt (line 15)) 2025-09-09T14:07:25.3606266Z Downloading fire-0.7.1-py3-none-any.whl.metadata (5.8 kB) 2025-09-09T14:07:25.3606757Z Collecting tabulate (from -r dev-requirements.txt (line 16)) 2025-09-09T14:07:25.3607259Z Downloading tabulate-0.9.0-py3-none-any.whl.metadata (34 kB) 2025-09-09T14:07:25.3607772Z Collecting tiktoken (from -r dev-requirements.txt (line 17)) 2025-09-09T14:07:25.3608437Z Downloading tiktoken-0.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.7 kB) 2025-09-09T14:07:25.3609089Z Collecting blobfile (from -r dev-requirements.txt (line 18)) 2025-09-09T14:07:25.3609815Z Downloading blobfile-3.1.0-py3-none-any.whl.metadata (15 kB) 2025-09-09T14:07:25.3610608Z Collecting lm_eval (from -r dev-requirements.txt (line 19)) 2025-09-09T14:07:25.3611113Z Downloading lm_eval-0.4.9.1-py3-none-any.whl.metadata (53 kB) 2025-09-09T14:07:25.3611614Z Collecting diskcache (from -r dev-requirements.txt (line 21)) 2025-09-09T14:07:25.3612139Z Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB) 2025-09-09T14:07:25.3612668Z Collecting pycocotools (from -r dev-requirements.txt (line 22)) 2025-09-09T14:07:25.3613348Z Downloading pycocotools-2.0.10-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.3 kB) 2025-09-09T14:07:25.3614154Z Collecting tqdm (from -r dev-requirements.txt (line 23)) 2025-09-09T14:07:25.3614616Z Downloading tqdm-4.67.1-py3-none-any.whl.metadata (57 kB) 2025-09-09T14:07:25.3615156Z Collecting importlib_metadata (from -r dev-requirements.txt (line 24)) 2025-09-09T14:07:25.3615749Z Downloading importlib_metadata-8.7.0-py3-none-any.whl.metadata (4.8 kB) 2025-09-09T14:07:25.3616301Z Collecting ninja (from -r dev-requirements.txt (line 27)) 2025-09-09T14:07:25.3616935Z Downloading ninja-1.13.0-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (5.1 kB) 2025-09-09T14:07:25.3617600Z Collecting cmake<4.0.0,>=3.19.0 (from -r dev-requirements.txt (line 30)) 2025-09-09T14:07:25.3618259Z Downloading cmake-3.31.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.3 kB) 2025-09-09T14:07:25.3618892Z Collecting ruff==0.11.6 (from -r dev-requirements.txt (line 33)) 2025-09-09T14:07:25.3619529Z Downloading ruff-0.11.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (25 kB) 2025-09-09T14:07:25.3620158Z Collecting pre-commit (from -r dev-requirements.txt (line 34)) 2025-09-09T14:07:25.3620697Z Downloading pre_commit-4.3.0-py2.py3-none-any.whl.metadata (1.2 kB) 2025-09-09T14:07:25.3621308Z Collecting exceptiongroup>=1 (from pytest->-r dev-requirements.txt (line 2)) 2025-09-09T14:07:25.3621916Z Downloading exceptiongroup-1.3.0-py3-none-any.whl.metadata (6.7 kB) 2025-09-09T14:07:25.3622503Z Collecting iniconfig>=1 (from pytest->-r dev-requirements.txt (line 2)) 2025-09-09T14:07:25.3623046Z Downloading iniconfig-2.1.0-py3-none-any.whl.metadata (2.7 kB) 2025-09-09T14:07:25.3623593Z Collecting pluggy<2,>=1.5 (from pytest->-r dev-requirements.txt (line 2)) 2025-09-09T14:07:25.3624118Z Downloading pluggy-1.6.0-py3-none-any.whl.metadata (4.8 kB) 2025-09-09T14:07:25.3624665Z Collecting pygments>=2.7.2 (from pytest->-r dev-requirements.txt (line 2)) 2025-09-09T14:07:25.3625228Z Downloading pygments-2.19.2-py3-none-any.whl.metadata (2.5 kB) 2025-09-09T14:07:25.3625748Z Collecting tomli>=1 (from pytest->-r dev-requirements.txt (line 2)) 
2025-09-09T14:07:25.3626261Z Downloading tomli-2.2.1-py3-none-any.whl.metadata (10 kB) 2025-09-09T14:07:25.3626813Z Collecting lxml (from unittest-xml-reporting->-r dev-requirements.txt (line 3)) 2025-09-09T14:07:25.3627530Z Downloading lxml-6.0.1-cp39-cp39-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl.metadata (3.8 kB) 2025-09-09T14:07:25.3628583Z Requirement already satisfied: filelock in /opt/conda/envs/venv/lib/python3.9/site-packages (from transformers->-r dev-requirements.txt (line 6)) (3.19.1) 2025-09-09T14:07:25.3629593Z Collecting huggingface-hub<1.0,>=0.34.0 (from transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3630248Z Downloading huggingface_hub-0.34.4-py3-none-any.whl.metadata (14 kB) 2025-09-09T14:07:25.3630828Z Collecting numpy>=1.17 (from transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3631520Z Downloading numpy-2.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB) 2025-09-09T14:07:25.3632198Z Collecting pyyaml>=5.1 (from transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3632896Z Downloading PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB) 2025-09-09T14:07:25.3633611Z Collecting regex!=2019.12.17 (from transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3634648Z Downloading regex-2025.9.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (40 kB) 2025-09-09T14:07:25.3635523Z Collecting requests (from transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3636081Z Downloading requests-2.32.5-py3-none-any.whl.metadata (4.9 kB) 2025-09-09T14:07:25.3636686Z Collecting tokenizers<=0.23.0,>=0.22.0 (from transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3637455Z Downloading tokenizers-0.22.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.8 kB) 2025-09-09T14:07:25.3638262Z Collecting safetensors>=0.4.3 (from transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3639018Z Downloading safetensors-0.6.2-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.1 kB) 2025-09-09T14:07:25.3640235Z Requirement already satisfied: fsspec>=2023.5.0 in /opt/conda/envs/venv/lib/python3.9/site-packages (from huggingface-hub<1.0,>=0.34.0->transformers->-r dev-requirements.txt (line 6)) (2025.7.0) 2025-09-09T14:07:25.3641849Z Requirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/envs/venv/lib/python3.9/site-packages (from huggingface-hub<1.0,>=0.34.0->transformers->-r dev-requirements.txt (line 6)) (4.14.1) 2025-09-09T14:07:25.3643084Z Collecting hf-xet<2.0.0,>=1.1.3 (from huggingface-hub<1.0,>=0.34.0->transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:25.3643871Z Downloading hf_xet-1.1.9-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.7 kB) 2025-09-09T14:07:25.3644558Z Collecting attrs>=22.2.0 (from hypothesis->-r dev-requirements.txt (line 7)) 2025-09-09T14:07:25.3645093Z Downloading attrs-25.3.0-py3-none-any.whl.metadata (10 kB) 2025-09-09T14:07:25.3645699Z Collecting sortedcontainers<3.0.0,>=2.1.0 (from hypothesis->-r dev-requirements.txt (line 7)) 2025-09-09T14:07:25.3646392Z Downloading sortedcontainers-2.4.0-py2.py3-none-any.whl.metadata (10 kB) 2025-09-09T14:07:25.3647440Z Requirement already satisfied: torch<3,>=2.2 in /opt/conda/envs/venv/lib/python3.9/site-packages (from bitsandbytes->-r dev-requirements.txt (line 12)) (2.9.0.dev20250825+cpu) 2025-09-09T14:07:25.3648870Z Requirement 
already satisfied: sympy>=1.13.3 in /opt/conda/envs/venv/lib/python3.9/site-packages (from torch<3,>=2.2->bitsandbytes->-r dev-requirements.txt (line 12)) (1.14.0) 2025-09-09T14:07:25.3650292Z Requirement already satisfied: networkx>=2.5.1 in /opt/conda/envs/venv/lib/python3.9/site-packages (from torch<3,>=2.2->bitsandbytes->-r dev-requirements.txt (line 12)) (3.2.1) 2025-09-09T14:07:25.3651688Z Requirement already satisfied: jinja2 in /opt/conda/envs/venv/lib/python3.9/site-packages (from torch<3,>=2.2->bitsandbytes->-r dev-requirements.txt (line 12)) (3.1.6) 2025-09-09T14:07:25.3652699Z Collecting contourpy>=1.0.1 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:25.3653422Z Downloading contourpy-1.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.4 kB) 2025-09-09T14:07:25.3654145Z Collecting cycler>=0.10 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:25.3654709Z Downloading cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB) 2025-09-09T14:07:25.3655271Z Collecting fonttools>=4.22.0 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:29.6691899Z Downloading fonttools-4.59.2-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (109 kB) 2025-09-09T14:07:29.6692694Z Collecting kiwisolver>=1.3.1 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:29.6693477Z Downloading kiwisolver-1.4.7-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.metadata (6.3 kB) 2025-09-09T14:07:29.6694191Z Collecting pillow>=8 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:29.6694888Z Downloading pillow-11.3.0-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (9.0 kB) 2025-09-09T14:07:29.6695607Z Collecting pyparsing>=2.3.1 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:29.6696405Z Downloading pyparsing-3.2.3-py3-none-any.whl.metadata (5.0 kB) 2025-09-09T14:07:29.6697023Z Collecting python-dateutil>=2.7 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:29.6697692Z Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB) 2025-09-09T14:07:29.6698395Z Collecting importlib-resources>=3.2.0 (from matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:29.6699069Z Downloading importlib_resources-6.5.2-py3-none-any.whl.metadata (3.9 kB) 2025-09-09T14:07:29.6699782Z Collecting pytz>=2020.1 (from pandas->-r dev-requirements.txt (line 14)) 2025-09-09T14:07:29.6700331Z Downloading pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB) 2025-09-09T14:07:29.6700872Z Collecting tzdata>=2022.7 (from pandas->-r dev-requirements.txt (line 14)) 2025-09-09T14:07:29.6701452Z Downloading tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB) 2025-09-09T14:07:29.6702003Z Collecting termcolor (from fire->-r dev-requirements.txt (line 15)) 2025-09-09T14:07:29.6702558Z Downloading termcolor-3.1.0-py3-none-any.whl.metadata (6.4 kB) 2025-09-09T14:07:29.6703138Z Collecting pycryptodomex>=3.8 (from blobfile->-r dev-requirements.txt (line 18)) 2025-09-09T14:07:29.6703907Z Downloading pycryptodomex-3.23.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.4 kB) 2025-09-09T14:07:29.6704657Z Collecting urllib3<3,>=1.25.3 (from blobfile->-r dev-requirements.txt (line 18)) 2025-09-09T14:07:29.6705207Z Downloading urllib3-2.5.0-py3-none-any.whl.metadata (6.5 kB) 2025-09-09T14:07:29.6705780Z Collecting accelerate>=0.26.0 (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6706391Z Downloading 
accelerate-1.10.1-py3-none-any.whl.metadata (19 kB) 2025-09-09T14:07:29.6706935Z Collecting evaluate (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6707480Z Downloading evaluate-0.4.5-py3-none-any.whl.metadata (9.5 kB) 2025-09-09T14:07:29.6708039Z Collecting datasets<4.0,>=2.16.0 (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6708603Z Downloading datasets-3.6.0-py3-none-any.whl.metadata (19 kB) 2025-09-09T14:07:29.6709152Z Collecting jsonlines (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6709698Z Downloading jsonlines-4.0.0-py3-none-any.whl.metadata (1.6 kB) 2025-09-09T14:07:29.6710433Z Collecting numexpr (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6711104Z Downloading numexpr-2.10.2-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (8.1 kB) 2025-09-09T14:07:29.6711796Z Collecting peft>=0.2.0 (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6712302Z Downloading peft-0.17.1-py3-none-any.whl.metadata (14 kB) 2025-09-09T14:07:29.6712840Z Collecting pybind11>=2.6.2 (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6713403Z Downloading pybind11-3.0.1-py3-none-any.whl.metadata (10.0 kB) 2025-09-09T14:07:29.6713965Z Collecting pytablewriter (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6714565Z Downloading pytablewriter-1.2.1-py3-none-any.whl.metadata (38 kB) 2025-09-09T14:07:29.6715203Z Collecting rouge-score>=0.0.4 (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6715711Z Downloading rouge_score-0.1.2.tar.gz (17 kB) 2025-09-09T14:07:29.6716308Z Preparing metadata (setup.py) ... [?25l- done 2025-09-09T14:07:29.6716904Z [?25hCollecting sacrebleu>=1.5.0 (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6717502Z Downloading sacrebleu-2.5.1-py3-none-any.whl.metadata (51 kB) 2025-09-09T14:07:29.6718067Z Collecting scikit-learn>=0.24.1 (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6718802Z Downloading scikit_learn-1.6.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (18 kB) 2025-09-09T14:07:29.6719506Z Collecting sqlitedict (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6719991Z Downloading sqlitedict-2.1.0.tar.gz (21 kB) 2025-09-09T14:07:29.6720543Z Preparing metadata (setup.py) ... [?25l- done 2025-09-09T14:07:29.6721166Z [?25hCollecting tqdm-multiprocess (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6721823Z Downloading tqdm_multiprocess-0.0.11-py3-none-any.whl.metadata (5.7 kB) 2025-09-09T14:07:29.6722408Z Collecting zstandard (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6723125Z Downloading zstandard-0.24.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (3.1 kB) 2025-09-09T14:07:29.6723888Z Collecting dill (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6724394Z Downloading dill-0.4.0-py3-none-any.whl.metadata (10 kB) 2025-09-09T14:07:29.6724915Z Collecting word2number (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6725396Z Downloading word2number-1.1.zip (9.7 kB) 2025-09-09T14:07:29.6725825Z Preparing metadata (setup.py) ... 
[?25l- done 2025-09-09T14:07:29.6726405Z [?25hCollecting more_itertools (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6727009Z Downloading more_itertools-10.8.0-py3-none-any.whl.metadata (39 kB) 2025-09-09T14:07:29.6727648Z Collecting pyarrow>=15.0.0 (from datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6728337Z Downloading pyarrow-21.0.0-cp39-cp39-manylinux_2_28_x86_64.whl.metadata (3.3 kB) 2025-09-09T14:07:29.6728909Z Collecting dill (from lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6729409Z Downloading dill-0.3.8-py3-none-any.whl.metadata (10 kB) 2025-09-09T14:07:29.6729985Z Collecting xxhash (from datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6730709Z Downloading xxhash-3.5.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB) 2025-09-09T14:07:29.6731499Z Collecting multiprocess<0.70.17 (from datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6732186Z Downloading multiprocess-0.70.16-py39-none-any.whl.metadata (7.2 kB) 2025-09-09T14:07:29.6732904Z Collecting fsspec>=2023.5.0 (from huggingface-hub<1.0,>=0.34.0->transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:29.6733589Z Downloading fsspec-2025.3.0-py3-none-any.whl.metadata (11 kB) 2025-09-09T14:07:29.6734331Z Collecting aiohttp!=4.0.0a0,!=4.0.0a1 (from fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6735229Z Downloading aiohttp-3.12.15-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.7 kB) 2025-09-09T14:07:29.6735948Z Collecting zipp>=3.20 (from importlib_metadata->-r dev-requirements.txt (line 24)) 2025-09-09T14:07:29.6736522Z Downloading zipp-3.23.0-py3-none-any.whl.metadata (3.6 kB) 2025-09-09T14:07:29.6737052Z Collecting cfgv>=2.0.0 (from pre-commit->-r dev-requirements.txt (line 34)) 2025-09-09T14:07:29.6737588Z Downloading cfgv-3.4.0-py2.py3-none-any.whl.metadata (8.5 kB) 2025-09-09T14:07:29.6738157Z Collecting identify>=1.0.0 (from pre-commit->-r dev-requirements.txt (line 34)) 2025-09-09T14:07:29.6738736Z Downloading identify-2.6.14-py2.py3-none-any.whl.metadata (4.4 kB) 2025-09-09T14:07:29.6739341Z Collecting nodeenv>=0.11.1 (from pre-commit->-r dev-requirements.txt (line 34)) 2025-09-09T14:07:29.6739923Z Downloading nodeenv-1.9.1-py2.py3-none-any.whl.metadata (21 kB) 2025-09-09T14:07:29.6740503Z Collecting virtualenv>=20.10.0 (from pre-commit->-r dev-requirements.txt (line 34)) 2025-09-09T14:07:29.6741112Z Downloading virtualenv-20.34.0-py3-none-any.whl.metadata (4.6 kB) 2025-09-09T14:07:29.6741731Z Collecting psutil (from accelerate>=0.26.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6742634Z Downloading psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (22 kB) 2025-09-09T14:07:29.6743832Z Collecting aiohappyeyeballs>=2.5.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6744741Z Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB) 2025-09-09T14:07:29.6745605Z Collecting aiosignal>=1.4.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6746410Z Downloading aiosignal-1.4.0-py3-none-any.whl.metadata (3.7 kB) 
2025-09-09T14:07:29.6747255Z Collecting async-timeout<6.0,>=4.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6748165Z Downloading async_timeout-5.0.1-py3-none-any.whl.metadata (5.1 kB) 2025-09-09T14:07:29.6748987Z Collecting frozenlist>=1.1.1 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6750116Z Downloading frozenlist-1.7.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (18 kB) 2025-09-09T14:07:29.6751342Z Collecting multidict<7.0,>=4.5 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6752394Z Downloading multidict-6.6.4-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (5.3 kB) 2025-09-09T14:07:29.6753437Z Collecting propcache>=0.2.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:29.6754390Z Downloading propcache-0.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB) 2025-09-09T14:07:29.6755410Z Collecting yarl<2.0,>=1.17.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4114606Z Downloading yarl-1.20.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (73 kB) 2025-09-09T14:07:31.4115707Z Collecting idna>=2.0 (from yarl<2.0,>=1.17.0->aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]<=2025.3.0,>=2023.1.0->datasets<4.0,>=2.16.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4116508Z Downloading idna-3.10-py3-none-any.whl.metadata (10 kB) 2025-09-09T14:07:31.4117102Z Collecting six>=1.5 (from python-dateutil>=2.7->matplotlib->-r dev-requirements.txt (line 13)) 2025-09-09T14:07:31.4117746Z Downloading six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB) 2025-09-09T14:07:31.4118426Z Collecting charset_normalizer<4,>=2 (from requests->transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:31.4119358Z Downloading charset_normalizer-3.4.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (36 kB) 2025-09-09T14:07:31.4120274Z Collecting certifi>=2017.4.17 (from requests->transformers->-r dev-requirements.txt (line 6)) 2025-09-09T14:07:31.4120916Z Downloading certifi-2025.8.3-py3-none-any.whl.metadata (2.4 kB) 2025-09-09T14:07:31.4121532Z Collecting absl-py (from rouge-score>=0.0.4->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4122121Z Downloading absl_py-2.3.1-py3-none-any.whl.metadata (3.3 kB) 2025-09-09T14:07:31.4122697Z Collecting nltk (from rouge-score>=0.0.4->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4123278Z Downloading nltk-3.9.1-py3-none-any.whl.metadata (2.9 kB) 2025-09-09T14:07:31.4123866Z Collecting portalocker (from sacrebleu>=1.5.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4124518Z Downloading portalocker-3.2.0-py3-none-any.whl.metadata (8.7 kB) 2025-09-09T14:07:31.4125131Z Collecting colorama (from sacrebleu>=1.5.0->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4125760Z Downloading colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB) 2025-09-09T14:07:31.4126396Z Collecting 
scipy>=1.6.0 (from scikit-learn>=0.24.1->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4127337Z Downloading scipy-1.13.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB) 2025-09-09T14:07:31.4128107Z Collecting joblib>=1.2.0 (from scikit-learn>=0.24.1->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4128723Z Downloading joblib-1.5.2-py3-none-any.whl.metadata (5.6 kB) 2025-09-09T14:07:31.4129386Z Collecting threadpoolctl>=3.1.0 (from scikit-learn>=0.24.1->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4130073Z Downloading threadpoolctl-3.6.0-py3-none-any.whl.metadata (13 kB) 2025-09-09T14:07:31.4131251Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/venv/lib/python3.9/site-packages (from sympy>=1.13.3->torch<3,>=2.2->bitsandbytes->-r dev-requirements.txt (line 12)) (1.3.0) 2025-09-09T14:07:31.4132395Z Collecting distlib<1,>=0.3.7 (from virtualenv>=20.10.0->pre-commit->-r dev-requirements.txt (line 34)) 2025-09-09T14:07:31.4133059Z Downloading distlib-0.4.0-py2.py3-none-any.whl.metadata (5.2 kB) 2025-09-09T14:07:31.4133755Z Collecting platformdirs<5,>=3.9.1 (from virtualenv>=20.10.0->pre-commit->-r dev-requirements.txt (line 34)) 2025-09-09T14:07:31.4134447Z Downloading platformdirs-4.4.0-py3-none-any.whl.metadata (12 kB) 2025-09-09T14:07:31.4135486Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/venv/lib/python3.9/site-packages (from jinja2->torch<3,>=2.2->bitsandbytes->-r dev-requirements.txt (line 12)) (3.0.2) 2025-09-09T14:07:31.4136597Z Collecting click (from nltk->rouge-score>=0.0.4->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4137204Z Downloading click-8.1.8-py3-none-any.whl.metadata (2.3 kB) 2025-09-09T14:07:31.4138192Z Requirement already satisfied: setuptools>=38.3.0 in /opt/conda/envs/venv/lib/python3.9/site-packages (from pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) (78.1.1) 2025-09-09T14:07:31.4139308Z Collecting DataProperty<2,>=1.1.0 (from pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4139974Z Downloading DataProperty-1.1.0-py3-none-any.whl.metadata (11 kB) 2025-09-09T14:07:31.4140639Z Collecting mbstrdecoder<2,>=1.0.0 (from pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4141301Z Downloading mbstrdecoder-1.1.4-py3-none-any.whl.metadata (4.3 kB) 2025-09-09T14:07:31.4141974Z Collecting pathvalidate<4,>=2.3.0 (from pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4142633Z Downloading pathvalidate-3.3.1-py3-none-any.whl.metadata (12 kB) 2025-09-09T14:07:31.4143285Z Collecting tabledata<2,>=1.3.1 (from pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4143931Z Downloading tabledata-1.3.4-py3-none-any.whl.metadata (3.7 kB) 2025-09-09T14:07:31.4144548Z Collecting tcolorpy<1,>=0.0.5 (from pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4145186Z Downloading tcolorpy-0.1.7-py3-none-any.whl.metadata (6.3 kB) 2025-09-09T14:07:31.4145880Z Collecting typepy<2,>=1.3.2 (from typepy[datetime]<2,>=1.3.2->pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4146582Z Downloading typepy-1.3.4-py3-none-any.whl.metadata (9.2 kB) 2025-09-09T14:07:31.4147270Z Collecting chardet<6,>=3.0.4 (from mbstrdecoder<2,>=1.0.0->pytablewriter->lm_eval->-r dev-requirements.txt (line 19)) 2025-09-09T14:07:31.4147957Z Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB) 
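All of the dependency resolution above comes from the pip install -r dev-requirements.txt step of the job script echoed earlier in this log. For reference, a rough local reproduction of that script would look like the sketch below (assuming conda is installed and the commands are run from the pytorch/ao checkout; the eval line matches what the job's exec_script wrapper prepends, and the final python -c line is an extra sanity check that is not part of the job):
  # approximate local re-run of the job script from this log (CPU nightly wheels)
  eval "$(conda shell.bash hook)"
  conda create -n venv python=3.9 -y
  conda activate venv
  python -m pip install --upgrade pip
  pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
  pip install -r dev-requirements.txt
  pip install .
  export CONDA=$(dirname $(dirname $(which conda)))
  export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH
  # not in the job script: quick check that the expected CPU nightly build is importable
  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
  pytest test --verbose -s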
2025-09-09T14:07:31.4148570Z Downloading ruff-0.11.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.5 MB) 2025-09-09T14:07:31.4149520Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/11.5 MB ? eta -:--:-- 2025-09-09T14:07:31.4150210Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.5/11.5 MB 153.6 MB/s 0:00:00 2025-09-09T14:07:31.4150974Z [?25hDownloading cmake-3.31.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (27.8 MB) 2025-09-09T14:07:31.4151826Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/27.8 MB ? eta -:--:-- 2025-09-09T14:07:31.4152550Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╸ 27.8/27.8 MB 170.4 MB/s eta 0:00:01 2025-09-09T14:07:31.4153248Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 27.8/27.8 MB 130.1 MB/s 0:00:00 2025-09-09T14:07:31.4153819Z [?25hDownloading pytest-8.4.2-py3-none-any.whl (365 kB) 2025-09-09T14:07:31.4154240Z Downloading pluggy-1.6.0-py3-none-any.whl (20 kB) 2025-09-09T14:07:31.4154902Z Downloading unittest_xml_reporting-3.2.0-py2.py3-none-any.whl (20 kB) 2025-09-09T14:07:31.4155464Z Downloading parameterized-0.9.0-py2.py3-none-any.whl (20 kB) 2025-09-09T14:07:31.4155929Z Downloading packaging-25.0-py3-none-any.whl (66 kB) 2025-09-09T14:07:31.4156392Z Downloading transformers-4.56.1-py3-none-any.whl (11.6 MB) 2025-09-09T14:07:31.4157022Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/11.6 MB ? eta -:--:-- 2025-09-09T14:07:31.4157713Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.6/11.6 MB 294.3 MB/s 0:00:00 2025-09-09T14:07:31.4158329Z [?25hDownloading huggingface_hub-0.34.4-py3-none-any.whl (561 kB) 2025-09-09T14:07:31.4158971Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/561.5 kB ? eta -:--:-- 2025-09-09T14:07:31.4159653Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 561.5/561.5 kB 68.9 MB/s 0:00:00 2025-09-09T14:07:31.4160402Z [?25hDownloading hf_xet-1.1.9-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB) 2025-09-09T14:07:31.4161154Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/3.2 MB ? eta -:--:-- 2025-09-09T14:07:31.4171817Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 230.7 MB/s 0:00:00 2025-09-09T14:07:31.4172659Z [?25hDownloading tokenizers-0.22.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.3 MB) 2025-09-09T14:07:31.4173472Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/3.3 MB ? eta -:--:-- 2025-09-09T14:07:31.4174140Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 242.3 MB/s 0:00:00 2025-09-09T14:07:31.4174734Z [?25hDownloading hypothesis-6.138.15-py3-none-any.whl (533 kB) 2025-09-09T14:07:31.4175385Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/533.6 kB ? eta -:--:-- 2025-09-09T14:07:31.4176078Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 533.6/533.6 kB 67.9 MB/s 0:00:00 2025-09-09T14:07:31.4176726Z [?25hDownloading sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB) 2025-09-09T14:07:32.3925261Z Downloading sentencepiece-0.2.1-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (1.4 MB) 2025-09-09T14:07:32.3926579Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/1.4 MB ? eta -:--:-- 2025-09-09T14:07:32.3927253Z  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 134.3 MB/s 0:00:00 2025-09-09T14:07:32.3927848Z [?25hDownloading expecttest-0.3.0-py3-none-any.whl (8.2 kB) 2025-09-09T14:07:32.3928444Z Downloading bitsandbytes-0.47.0-py3-none-manylinux_2_24_x86_64.whl (61.3 MB) 2025-09-09T14:07:32.3929148Z [?25l ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/61.3 MB ? 
2025-09-09T14:07:32.3931535Z Downloading matplotlib-3.9.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (8.3 MB)
2025-09-09T14:07:32.3933743Z Downloading pandas-2.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.4 MB)
2025-09-09T14:07:32.3935738Z Downloading fire-0.7.1-py3-none-any.whl (115 kB)
2025-09-09T14:07:32.3936151Z Downloading tabulate-0.9.0-py3-none-any.whl (35 kB)
2025-09-09T14:07:32.3936736Z Downloading tiktoken-0.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)
2025-09-09T14:07:32.3938721Z Downloading blobfile-3.1.0-py3-none-any.whl (75 kB)
2025-09-09T14:07:32.3939148Z Downloading urllib3-2.5.0-py3-none-any.whl (129 kB)
2025-09-09T14:07:32.3939575Z Downloading lm_eval-0.4.9.1-py3-none-any.whl (7.5 MB)
2025-09-09T14:07:32.3941391Z Downloading datasets-3.6.0-py3-none-any.whl (491 kB)
2025-09-09T14:07:32.3941809Z Downloading dill-0.3.8-py3-none-any.whl (116 kB)
2025-09-09T14:07:32.3942227Z Downloading fsspec-2025.3.0-py3-none-any.whl (193 kB)
2025-09-09T14:07:32.3942747Z Downloading multiprocess-0.70.16-py39-none-any.whl (133 kB)
2025-09-09T14:07:32.3943221Z Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
2025-09-09T14:07:32.3943822Z Downloading pycocotools-2.0.10-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (453 kB)
2025-09-09T14:07:32.3944420Z Downloading tqdm-4.67.1-py3-none-any.whl (78 kB)
2025-09-09T14:07:32.3944876Z Downloading importlib_metadata-8.7.0-py3-none-any.whl (27 kB)
2025-09-09T14:07:32.3945480Z Downloading ninja-1.13.0-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (180 kB)
2025-09-09T14:07:32.3946146Z Downloading pre_commit-4.3.0-py2.py3-none-any.whl (220 kB)
2025-09-09T14:07:32.3946603Z Downloading accelerate-1.10.1-py3-none-any.whl (374 kB)
2025-09-09T14:07:32.3947183Z Downloading numpy-2.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.5 MB)
2025-09-09T14:07:32.3949383Z Downloading aiohttp-3.12.15-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.6 MB)
2025-09-09T14:07:32.3951404Z Downloading async_timeout-5.0.1-py3-none-any.whl (6.2 kB)
2025-09-09T14:07:32.3952111Z Downloading multidict-6.6.4-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (239 kB)
2025-09-09T14:07:32.3952924Z Downloading yarl-1.20.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (327 kB)
2025-09-09T14:07:32.3953521Z Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)
2025-09-09T14:07:32.3954009Z Downloading aiosignal-1.4.0-py3-none-any.whl (7.5 kB)
2025-09-09T14:07:32.3954419Z Downloading attrs-25.3.0-py3-none-any.whl (63 kB)
2025-09-09T14:07:32.3954928Z Downloading cfgv-3.4.0-py2.py3-none-any.whl (7.2 kB)
2025-09-09T14:07:32.3955520Z Downloading contourpy-1.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (321 kB)
2025-09-09T14:07:32.3956100Z Downloading cycler-0.12.1-py3-none-any.whl (8.3 kB)
2025-09-09T14:07:32.3956520Z Downloading evaluate-0.4.5-py3-none-any.whl (84 kB)
2025-09-09T14:07:32.3957123Z Downloading exceptiongroup-1.3.0-py3-none-any.whl (16 kB)
2025-09-09T14:07:32.3957764Z Downloading fonttools-4.59.2-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (4.8 MB)
2025-09-09T14:07:32.3960272Z Downloading frozenlist-1.7.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (225 kB)
2025-09-09T14:07:32.3961483Z Downloading identify-2.6.14-py2.py3-none-any.whl (99 kB)
2025-09-09T14:07:32.3962165Z Downloading idna-3.10-py3-none-any.whl (70 kB)
2025-09-09T14:07:32.3962889Z Downloading importlib_resources-6.5.2-py3-none-any.whl (37 kB)
2025-09-09T14:07:32.3963739Z Downloading iniconfig-2.1.0-py3-none-any.whl (6.0 kB)
2025-09-09T14:07:32.3964667Z Downloading kiwisolver-1.4.7-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.6 MB)
2025-09-09T14:07:32.3968653Z Downloading lxml-6.0.1-cp39-cp39-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl (5.3 MB)
2025-09-09T14:07:32.3972395Z Downloading nodeenv-1.9.1-py2.py3-none-any.whl (22 kB)
2025-09-09T14:07:32.3973235Z Downloading peft-0.17.1-py3-none-any.whl (504 kB)
2025-09-09T14:07:32.3974265Z Downloading pillow-11.3.0-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (6.6 MB)
2025-09-09T14:07:34.3401073Z Downloading propcache-0.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (200 kB)
2025-09-09T14:07:34.3401867Z Downloading pyarrow-21.0.0-cp39-cp39-manylinux_2_28_x86_64.whl (42.7 MB)
2025-09-09T14:07:34.3403876Z Downloading pybind11-3.0.1-py3-none-any.whl (293 kB)
2025-09-09T14:07:34.3404506Z Downloading pycryptodomex-3.23.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)
2025-09-09T14:07:34.3406556Z Downloading pygments-2.19.2-py3-none-any.whl (1.2 MB)
2025-09-09T14:07:34.3408424Z Downloading pyparsing-3.2.3-py3-none-any.whl (111 kB)
2025-09-09T14:07:34.3408948Z Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
2025-09-09T14:07:34.3409469Z Downloading pytz-2025.2-py2.py3-none-any.whl (509 kB)
2025-09-09T14:07:34.3410330Z Downloading PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (737 kB)
2025-09-09T14:07:34.3412880Z Downloading regex-2025.9.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (789 kB)
2025-09-09T14:07:34.3415160Z Downloading requests-2.32.5-py3-none-any.whl (64 kB)
2025-09-09T14:07:34.3415907Z Downloading charset_normalizer-3.4.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (152 kB)
2025-09-09T14:07:34.3416646Z Downloading certifi-2025.8.3-py3-none-any.whl (161 kB)
2025-09-09T14:07:34.3417092Z Downloading sacrebleu-2.5.1-py3-none-any.whl (104 kB)
2025-09-09T14:07:34.3417693Z Downloading safetensors-0.6.2-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (485 kB)
2025-09-09T14:07:34.3418459Z Downloading scikit_learn-1.6.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.5 MB)
2025-09-09T14:07:34.3420491Z Downloading joblib-1.5.2-py3-none-any.whl (308 kB)
2025-09-09T14:07:34.3421069Z Downloading scipy-1.13.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.6 MB)
2025-09-09T14:07:34.3423055Z Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
2025-09-09T14:07:34.3423506Z Downloading threadpoolctl-3.6.0-py3-none-any.whl (18 kB)
2025-09-09T14:07:34.3423943Z Downloading tomli-2.2.1-py3-none-any.whl (14 kB)
2025-09-09T14:07:34.3424355Z Downloading tzdata-2025.2-py2.py3-none-any.whl (347 kB)
2025-09-09T14:07:34.3424820Z Downloading virtualenv-20.34.0-py3-none-any.whl (6.0 MB)
2025-09-09T14:07:34.3426666Z Downloading distlib-0.4.0-py2.py3-none-any.whl (469 kB)
2025-09-09T14:07:34.3427132Z Downloading platformdirs-4.4.0-py3-none-any.whl (18 kB)
2025-09-09T14:07:34.3427581Z Downloading zipp-3.23.0-py3-none-any.whl (10 kB)
2025-09-09T14:07:34.3427973Z Downloading absl_py-2.3.1-py3-none-any.whl (135 kB)
2025-09-09T14:07:34.3428403Z Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
2025-09-09T14:07:34.3428835Z Downloading jsonlines-4.0.0-py3-none-any.whl (8.7 kB)
2025-09-09T14:07:34.3429292Z Downloading more_itertools-10.8.0-py3-none-any.whl (69 kB)
2025-09-09T14:07:34.3429716Z Downloading nltk-3.9.1-py3-none-any.whl (1.5 MB)
2025-09-09T14:07:34.3431567Z Downloading click-8.1.8-py3-none-any.whl (98 kB)
2025-09-09T14:07:34.3432150Z Downloading numexpr-2.10.2-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (396 kB)
2025-09-09T14:07:34.3432747Z Downloading portalocker-3.2.0-py3-none-any.whl (22 kB)
2025-09-09T14:07:34.3433573Z Downloading psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (277 kB)
2025-09-09T14:07:34.3434338Z Downloading pytablewriter-1.2.1-py3-none-any.whl (91 kB)
2025-09-09T14:07:34.3434892Z Downloading DataProperty-1.1.0-py3-none-any.whl (27 kB)
2025-09-09T14:07:34.3435365Z Downloading mbstrdecoder-1.1.4-py3-none-any.whl (7.9 kB)
2025-09-09T14:07:34.3435805Z Downloading chardet-5.2.0-py3-none-any.whl (199 kB)
2025-09-09T14:07:34.3436246Z Downloading pathvalidate-3.3.1-py3-none-any.whl (24 kB)
2025-09-09T14:07:34.3436679Z Downloading tabledata-1.3.4-py3-none-any.whl (11 kB)
2025-09-09T14:07:34.3437112Z Downloading tcolorpy-0.1.7-py3-none-any.whl (8.1 kB)
2025-09-09T14:07:34.3437531Z Downloading typepy-1.3.4-py3-none-any.whl (31 kB)
2025-09-09T14:07:34.3437943Z Downloading termcolor-3.1.0-py3-none-any.whl (7.7 kB)
2025-09-09T14:07:34.3438415Z Downloading tqdm_multiprocess-0.0.11-py3-none-any.whl (9.8 kB)
2025-09-09T14:07:34.3439020Z Downloading xxhash-3.5.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (193 kB)
2025-09-09T14:07:34.3439772Z Downloading zstandard-0.24.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (5.6 MB)
2025-09-09T14:07:34.3441903Z Building wheels for collected packages: rouge-score, sqlitedict, word2number
2025-09-09T14:07:34.3444458Z   DEPRECATION: Building 'rouge-score' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'rouge-score'. Discussion can be found at https://github.com/pypa/pip/issues/6334
2025-09-09T14:07:34.3446661Z   Building wheel for rouge-score (setup.py) ... done
2025-09-09T14:07:34.3447708Z   Created wheel for rouge-score: filename=rouge_score-0.1.2-py3-none-any.whl size=24988 sha256=26d614ad8f149775ca38254ce41e314f331ed6c29925260839f1a70e041bf671
2025-09-09T14:07:34.3448874Z   Stored in directory: /root/.cache/pip/wheels/9b/3d/39/09558097d3119ca0a4d462df68f22c6f3c1b345ac63a09b86e
2025-09-09T14:07:40.8833741Z   DEPRECATION: Building 'sqlitedict' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'sqlitedict'. Discussion can be found at https://github.com/pypa/pip/issues/6334
2025-09-09T14:07:40.8836399Z   Building wheel for sqlitedict (setup.py) ... done
2025-09-09T14:07:40.8837463Z   Created wheel for sqlitedict: filename=sqlitedict-2.1.0-py3-none-any.whl size=16958 sha256=afb4a4dc44f4c71a29b3b5c49b9e856aad0de9ae66fdbac483e326049a3d43c0
2025-09-09T14:07:40.8838589Z   Stored in directory: /root/.cache/pip/wheels/f6/48/c4/942f7a1d556fddd2348cb9ac262f251873dfd8a39afec5678e
2025-09-09T14:07:40.8841269Z   DEPRECATION: Building 'word2number' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'word2number'. Discussion can be found at https://github.com/pypa/pip/issues/6334
2025-09-09T14:07:40.8843435Z   Building wheel for word2number (setup.py) ... done
2025-09-09T14:07:40.8844512Z   Created wheel for word2number: filename=word2number-1.1-py3-none-any.whl size=5658 sha256=2d3cb316ae36f4ae592fc6e79ca2876a8c6dae1604ad92cfb385a0409e78d0e0
2025-09-09T14:07:40.8845631Z   Stored in directory: /root/.cache/pip/wheels/a0/4a/5b/d2f2df5c344ddbecb8bea759872c207ea91d93f57fb54e816e
2025-09-09T14:07:40.8846319Z Successfully built rouge-score sqlitedict word2number
2025-09-09T14:07:40.8851591Z Installing collected packages: word2number, sqlitedict, sortedcontainers, pytz, distlib, zstandard, zipp, xxhash, urllib3, tzdata, tqdm, tomli, threadpoolctl, termcolor, tcolorpy, tabulate, six, sentencepiece, safetensors, ruff, regex, pyyaml, pyparsing, pygments, pycryptodomex, pybind11, pyarrow, psutil, propcache, portalocker, pluggy, platformdirs, pillow, pathvalidate, parameterized, packaging, numpy, nodeenv, ninja, multidict, more_itertools, lxml, kiwisolver, joblib, iniconfig, idna, identify, hf-xet, fsspec, frozenlist, fonttools, expecttest, exceptiongroup, diskcache, dill, cycler, colorama, cmake, click, charset_normalizer, chardet, cfgv, certifi, attrs, async-timeout, aiohappyeyeballs, absl-py, yarl, virtualenv, unittest-xml-reporting, tqdm-multiprocess, scipy, sacrebleu, requests, python-dateutil, pytest, pycocotools, numexpr, nltk, multiprocess, mbstrdecoder, jsonlines, importlib-resources, importlib_metadata, hypothesis, fire, contourpy, blobfile, aiosignal, typepy, tiktoken, scikit-learn, rouge-score, pre-commit, pandas, matplotlib, huggingface-hub, bitsandbytes, aiohttp, tokenizers, accelerate, transformers, datasets, DataProperty, tabledata, peft, evaluate, pytablewriter, lm_eval
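The pip DEPRECATION notices above describe two ways to move a package off the legacy setup.py bdist_wheel path. A minimal sketch of both, assuming a plain setuptools project (illustrative only; the commands and file contents below are not taken from any of the packages in this log):

    # Option 1: opt in to the standardized PEP 517 build interface from the pip side
    pip install --use-pep517 .

    # Option 2: declare a build backend in pyproject.toml so pip no longer falls
    # back to legacy setup.py bdist_wheel (setuptools backend assumed here)
    cat > pyproject.toml <<'EOF'
    [build-system]
    requires = ["setuptools", "wheel"]
    build-backend = "setuptools.build_meta"
    EOF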
2025-09-09T14:07:47.6822341Z   Attempting uninstall: fsspec
2025-09-09T14:07:47.6823346Z     Found existing installation: fsspec 2025.7.0
2025-09-09T14:07:47.6824391Z     Uninstalling fsspec-2025.7.0:
2025-09-09T14:07:47.6825375Z       Successfully uninstalled fsspec-2025.7.0
2025-09-09T14:08:26.8324134Z Successfully installed DataProperty-1.1.0 absl-py-2.3.1 accelerate-1.10.1
aiohappyeyeballs-2.6.1 aiohttp-3.12.15 aiosignal-1.4.0 async-timeout-5.0.1 attrs-25.3.0 bitsandbytes-0.47.0 blobfile-3.1.0 certifi-2025.8.3 cfgv-3.4.0 chardet-5.2.0 charset_normalizer-3.4.3 click-8.1.8 cmake-3.31.6 colorama-0.4.6 contourpy-1.3.0 cycler-0.12.1 datasets-3.6.0 dill-0.3.8 diskcache-5.6.3 distlib-0.4.0 evaluate-0.4.5 exceptiongroup-1.3.0 expecttest-0.3.0 fire-0.7.1 fonttools-4.59.2 frozenlist-1.7.0 fsspec-2025.3.0 hf-xet-1.1.9 huggingface-hub-0.34.4 hypothesis-6.138.15 identify-2.6.14 idna-3.10 importlib-resources-6.5.2 importlib_metadata-8.7.0 iniconfig-2.1.0 joblib-1.5.2 jsonlines-4.0.0 kiwisolver-1.4.7 lm_eval-0.4.9.1 lxml-6.0.1 matplotlib-3.9.4 mbstrdecoder-1.1.4 more_itertools-10.8.0 multidict-6.6.4 multiprocess-0.70.16 ninja-1.13.0 nltk-3.9.1 nodeenv-1.9.1 numexpr-2.10.2 numpy-2.0.2 packaging-25.0 pandas-2.3.2 parameterized-0.9.0 pathvalidate-3.3.1 peft-0.17.1 pillow-11.3.0 platformdirs-4.4.0 pluggy-1.6.0 portalocker-3.2.0 pre-commit-4.3.0 propcache-0.3.2 psutil-7.0.0 pyarrow-21.0.0 pybind11-3.0.1 pycocotools-2.0.10 pycryptodomex-3.23.0 pygments-2.19.2 pyparsing-3.2.3 pytablewriter-1.2.1 pytest-8.4.2 python-dateutil-2.9.0.post0 pytz-2025.2 pyyaml-6.0.2 regex-2025.9.1 requests-2.32.5 rouge-score-0.1.2 ruff-0.11.6 sacrebleu-2.5.1 safetensors-0.6.2 scikit-learn-1.6.1 scipy-1.13.1 sentencepiece-0.2.1 six-1.17.0 sortedcontainers-2.4.0 sqlitedict-2.1.0 tabledata-1.3.4 tabulate-0.9.0 tcolorpy-0.1.7 termcolor-3.1.0 threadpoolctl-3.6.0 tiktoken-0.11.0 tokenizers-0.22.0 tomli-2.2.1 tqdm-4.67.1 tqdm-multiprocess-0.0.11 transformers-4.56.1 typepy-1.3.4 tzdata-2025.2 unittest-xml-reporting-3.2.0 urllib3-2.5.0 virtualenv-20.34.0 word2number-1.1 xxhash-3.5.0 yarl-1.20.1 zipp-3.23.0 zstandard-0.24.0 2025-09-09T14:08:26.8333007Z WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning. 2025-09-09T14:08:26.8334757Z + pip install . 2025-09-09T14:08:26.8335010Z Processing /pytorch/ao 2025-09-09T14:08:26.8335365Z Preparing metadata (setup.py) ... [?25l- done 2025-09-09T14:08:26.8335841Z [?25hBuilding wheels for collected packages: torchao 2025-09-09T14:08:26.8338225Z  DEPRECATION: Building 'torchao' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'torchao'. Discussion can be found at https://github.com/pypa/pip/issues/6334 2025-09-09T14:08:26.8340362Z  Building wheel for torchao (setup.py) ... 
done
2025-09-09T14:08:26.8341481Z   Created wheel for torchao: filename=torchao-0.14.0+git7c05f81-py3-none-any.whl size=1043958 sha256=5c079fd434f7b34cdd3144c5d31b4949967fc7203320736e75346b5ac8ac2460
2025-09-09T14:08:26.8342740Z   Stored in directory: /tmp/pip-ephem-wheel-cache-y2mk05r0/wheels/4d/54/dc/0c70e60a8677bf126f1486798ebe76c8770ada66c7376b401d
2025-09-09T14:08:26.8343474Z Successfully built torchao
2025-09-09T14:08:26.8343776Z Installing collected packages: torchao
2025-09-09T14:08:26.8344135Z Successfully installed torchao-0.14.0+git7c05f81
2025-09-09T14:08:26.8346135Z WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
2025-09-09T14:08:26.8347880Z ++++ which conda
2025-09-09T14:08:26.8348134Z +++ dirname /opt/conda/condabin/conda
2025-09-09T14:08:26.8348454Z ++ dirname /opt/conda/condabin
2025-09-09T14:08:26.8348726Z + export CONDA=/opt/conda
2025-09-09T14:08:26.8348990Z + CONDA=/opt/conda
2025-09-09T14:08:26.8349536Z + export LD_LIBRARY_PATH=/opt/conda/lib/:/opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:
2025-09-09T14:08:26.8350403Z + LD_LIBRARY_PATH=/opt/conda/lib/:/opt/rh/gcc-toolset-11/root/usr/lib64:/opt/rh/gcc-toolset-11/root/usr/lib:
2025-09-09T14:08:26.8350994Z + pytest test --verbose -s
2025-09-09T14:08:26.8351404Z ============================= test session starts ==============================
2025-09-09T14:08:26.8351996Z platform linux -- Python 3.9.23, pytest-8.4.2, pluggy-1.6.0 -- /opt/conda/envs/venv/bin/python3.9
2025-09-09T14:08:26.8352507Z cachedir: .pytest_cache
2025-09-09T14:08:26.8353147Z hypothesis profile 'ci' -> database=None, deadline=None, print_blob=True, derandomize=True, suppress_health_check=(HealthCheck.too_slow,)
2025-09-09T14:08:26.8353839Z rootdir: /pytorch/ao
2025-09-09T14:08:26.8354088Z plugins: hypothesis-6.138.15
2025-09-09T14:08:26.8354410Z collecting ...
2025-09-09T14:08:26.8355001Z collecting 0 items
2025-09-09T14:08:26.8355591Z collecting 26 items
2025-09-09T14:08:26.8356724Z collecting 526 items
2025-09-09T14:08:26.8357413Z collecting 1022 items / 3 skipped
2025-09-09T14:08:26.8358053Z collecting 1035 items / 6 skipped
2025-09-09T14:08:26.8359584Z collecting 2976 items / 14 skipped NOTE: Using slow Hadamard transform for SpinQuant.
For better performance on GPU, install `fast_hadamard_transform`: `pip install git+https://github.com/Dao-AILab/fast-hadamard-transform.git` 2025-09-09T14:08:26.8360737Z  2025-09-09T14:08:26.8361168Z collecting 3962 items / 14 skipped  2025-09-09T14:08:26.8361821Z collecting 5658 items / 14 skipped  2025-09-09T14:08:26.8362455Z collected 6963 items / 14 skipped  2025-09-09T14:08:26.8362808Z 2025-09-09T14:08:26.8363240Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config0] PASSED 2025-09-09T14:08:26.8365545Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config1] PASSED 2025-09-09T14:08:26.8366401Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config2] PASSED 2025-09-09T14:08:26.8367230Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config3] PASSED 2025-09-09T14:08:26.8368039Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config4] PASSED 2025-09-09T14:08:26.8368962Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config5] PASSED 2025-09-09T14:08:26.9394259Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config6] PASSED 2025-09-09T14:08:26.9395203Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config7] PASSED 2025-09-09T14:08:26.9396035Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config8] PASSED 2025-09-09T14:08:26.9396847Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config9] PASSED 2025-09-09T14:08:26.9397680Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config10] PASSED 2025-09-09T14:08:26.9398590Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config11] PASSED 2025-09-09T14:08:26.9399486Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config12] PASSED 2025-09-09T14:08:26.9400326Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config13] PASSED 2025-09-09T14:08:26.9401164Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config14] PASSED 2025-09-09T14:08:26.9402099Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config15] PASSED 2025-09-09T14:08:26.9402930Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config16] PASSED 2025-09-09T14:08:26.9403770Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config17] PASSED 2025-09-09T14:08:26.9404601Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config18] PASSED 2025-09-09T14:08:26.9405453Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config19] PASSED 2025-09-09T14:08:26.9406284Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config20] PASSED 2025-09-09T14:08:26.9407146Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config21] PASSED 2025-09-09T14:08:26.9408316Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config22] PASSED 2025-09-09T14:08:26.9409042Z test/core/test_config.py::test_disallowed_modules PASSED 2025-09-09T14:08:26.9409618Z test/core/test_config.py::test_version_mismatch PASSED 2025-09-09T14:08:26.9410460Z test/core/test_config.py::test_default_version PASSED 2025-09-09T14:08:26.9411306Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_copy__mismatch_metadata_apply_quant0 SKIPPED 2025-09-09T14:08:26.9412403Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_copy__mismatch_metadata_apply_quant1 SKIPPED 
2025-09-09T14:08:26.9413665Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_copy__mismatch_metadata_apply_quant2 SKIPPED 2025-09-09T14:08:26.9414740Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_copy__mismatch_metadata_apply_quant3 SKIPPED 2025-09-09T14:08:26.9415826Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_copy__mismatch_metadata_apply_quant4 SKIPPED 2025-09-09T14:08:26.9416897Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_copy__mismatch_metadata_apply_quant5 SKIPPED 2025-09-09T14:08:26.9418034Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_copy__mismatch_metadata_apply_quant6 SKIPPED 2025-09-09T14:08:26.9419121Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_print_quantized_module SKIPPED 2025-09-09T14:08:26.9420068Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_register_new_dispatch SKIPPED 2025-09-09T14:08:26.9421251Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_tensor_core_layout_transpose SKIPPED 2025-09-09T14:08:26.9422270Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_test_copy__apply_apply_quant0 SKIPPED 2025-09-09T14:08:26.9423297Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_test_copy__apply_apply_quant1 SKIPPED 2025-09-09T14:08:26.9424327Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_test_copy__apply_apply_quant2 SKIPPED 2025-09-09T14:08:26.9425332Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_test_copy__apply_apply_quant3 SKIPPED 2025-09-09T14:08:26.9426396Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_test_copy__apply_apply_quant4 SKIPPED 2025-09-09T14:08:26.9427459Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_test_copy__apply_apply_quant5 SKIPPED 2025-09-09T14:08:26.9428487Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_test_copy__apply_apply_quant6 SKIPPED 2025-09-09T14:08:26.9429610Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_to_affine_quantized_intx_static PASSED 2025-09-09T14:08:26.9430602Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_to_device_apply_quant0 SKIPPED 2025-09-09T14:08:26.9431568Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_to_device_apply_quant1 SKIPPED 2025-09-09T14:08:26.9432514Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_to_device_apply_quant2 SKIPPED 2025-09-09T14:08:26.9433474Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_to_device_apply_quant3 SKIPPED 2025-09-09T14:08:26.9434428Z test/dtypes/test_affine_quantized.py::TestAffineQuantized::test_weights_only SKIPPED 2025-09-09T14:08:26.9435505Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_alias_device_cpu_bfloat16 PASSED 2025-09-09T14:08:26.9436626Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_flatten_unflatten_device_cpu_bfloat16 PASSED 2025-09-09T14:08:26.9437755Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_matmul_device_cuda_bfloat16 PASSED 2025-09-09T14:08:26.9439755Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_mm_int4wo_device_cuda_bfloat16 SKIPPED 2025-09-09T14:08:26.9440943Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_slice_and_copy_int4wo_device_cuda_bfloat16 SKIPPED 2025-09-09T14:08:26.9442085Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_slice_gemlite_device_cuda_bfloat16 
SKIPPED 2025-09-09T14:08:26.9443259Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_slice_gemlite_device_cuda_float16 SKIPPED 2025-09-09T14:08:26.9444417Z test/dtypes/test_affine_quantized.py::TestAffineQuantizedBasic::test_slice_int4wo_device_cuda_bfloat16 SKIPPED 2025-09-09T14:08:26.9445817Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_choose_scale_float8_bounds_float8_e4m3fn_bfloat16 SKIPPED 2025-09-09T14:08:26.9447282Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_choose_scale_float8_bounds_float8_e4m3fn_float32 SKIPPED 2025-09-09T14:08:26.9448658Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_choose_scale_float8_bounds_float8_e5m2_bfloat16 SKIPPED 2025-09-09T14:08:26.9450042Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_choose_scale_float8_bounds_float8_e5m2_float32 SKIPPED 2025-09-09T14:08:26.9451591Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_bfloat16_block_size0 SKIPPED 2025-09-09T14:08:26.9453107Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_bfloat16_block_size1 SKIPPED 2025-09-09T14:08:26.9454707Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_bfloat16_block_size2 SKIPPED 2025-09-09T14:08:26.9456237Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_bfloat16_block_size3 SKIPPED 2025-09-09T14:08:26.9457732Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_float32_block_size0 SKIPPED 2025-09-09T14:08:26.9459278Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_float32_block_size1 SKIPPED 2025-09-09T14:08:26.9460820Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_float32_block_size2 SKIPPED 2025-09-09T14:08:26.9462369Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e4m3fn_float32_block_size3 SKIPPED 2025-09-09T14:08:26.9463939Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_bfloat16_block_size0 SKIPPED 2025-09-09T14:08:26.9465434Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_bfloat16_block_size1 SKIPPED 2025-09-09T14:08:26.9466937Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_bfloat16_block_size2 SKIPPED 2025-09-09T14:08:26.9468534Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_bfloat16_block_size3 SKIPPED 2025-09-09T14:08:26.9470062Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_float32_block_size0 SKIPPED 2025-09-09T14:08:26.9471609Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_float32_block_size1 SKIPPED 2025-09-09T14:08:26.9802333Z 
test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_float32_block_size2 SKIPPED 2025-09-09T14:08:26.9803913Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_float8_e5m2_float32_block_size3 SKIPPED 2025-09-09T14:08:26.9805359Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_dequantize_affine_float8_scale_broadcasting SKIPPED 2025-09-09T14:08:26.9806923Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_expected_kernels_on_gpu_granularity0_float8_config_version_1 SKIPPED 2025-09-09T14:08:26.9808575Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_expected_kernels_on_gpu_granularity0_float8_config_version_2 SKIPPED 2025-09-09T14:08:26.9810362Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_expected_kernels_on_gpu_granularity1_float8_config_version_1 SKIPPED 2025-09-09T14:08:26.9811890Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_expected_kernels_on_gpu_granularity1_float8_config_version_2 SKIPPED 2025-09-09T14:08:26.9813317Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_float8_tensor_slicing_basic_granularity0 SKIPPED 2025-09-09T14:08:26.9814715Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_float8_tensor_slicing_basic_granularity1 SKIPPED 2025-09-09T14:08:26.9816076Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_float8_tensor_slicing_edge_cases SKIPPED 2025-09-09T14:08:26.9817527Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_float8_tensor_slicing_functional_correctness_granularity0 SKIPPED 2025-09-09T14:08:26.9819103Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_float8_tensor_slicing_functional_correctness_granularity1 SKIPPED 2025-09-09T14:08:26.9820451Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_float8_tensor_slicing_per_row SKIPPED 2025-09-09T14:08:26.9821695Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_float8_tensor_slicing_per_tensor SKIPPED 2025-09-09T14:08:26.9823162Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9824869Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9826596Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9828296Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9830011Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9831660Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9833347Z 
test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9835141Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9837030Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_False_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9838670Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_False_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9840313Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_False_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9842164Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_False_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9843802Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_True_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9845563Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_True_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9847201Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_True_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9848827Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_static_compile_True_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9850629Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9852385Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9854155Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9855847Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9857597Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9859357Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9861161Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9862839Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9864505Z 
test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9866270Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9867957Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9869673Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9871416Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9873107Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_sizes1 SKIPPED 2025-09-09T14:08:26.9874885Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_sizes0 SKIPPED 2025-09-09T14:08:26.9876727Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_sizes1 SKIPPED 2025-09-09T14:08:26.9878356Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_False_granularity0_sizes0 SKIPPED 2025-09-09T14:08:26.9879987Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_False_granularity0_sizes1 SKIPPED 2025-09-09T14:08:27.0182608Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_False_granularity1_sizes0 SKIPPED 2025-09-09T14:08:27.0184233Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_False_granularity1_sizes1 SKIPPED 2025-09-09T14:08:27.0185877Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_True_granularity0_sizes0 SKIPPED 2025-09-09T14:08:27.0187625Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_True_granularity0_sizes1 SKIPPED 2025-09-09T14:08:27.0189294Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_True_granularity1_sizes0 SKIPPED 2025-09-09T14:08:27.0190988Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_static_compile_True_granularity1_sizes1 SKIPPED 2025-09-09T14:08:27.0192651Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_sizes0 SKIPPED 2025-09-09T14:08:27.0194316Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_sizes1 SKIPPED 2025-09-09T14:08:27.0196058Z 
test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_sizes0 SKIPPED 2025-09-09T14:08:27.0197736Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_sizes1 SKIPPED 2025-09-09T14:08:27.0199399Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_sizes0 SKIPPED 2025-09-09T14:08:27.0201074Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_sizes1 SKIPPED 2025-09-09T14:08:27.0202749Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_sizes0 SKIPPED 2025-09-09T14:08:27.0204406Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_sizes1 SKIPPED 2025-09-09T14:08:27.0206020Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_fp8_weight_dimension_warning SKIPPED 2025-09-09T14:08:27.0207227Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_invalid_granularity SKIPPED 2025-09-09T14:08:27.0208381Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mismatched_granularity SKIPPED 2025-09-09T14:08:27.0209824Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape0_bias_False SKIPPED 2025-09-09T14:08:27.0211722Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape0_bias_True SKIPPED 2025-09-09T14:08:27.0213366Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape1_bias_False SKIPPED 2025-09-09T14:08:27.0215021Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape1_bias_True SKIPPED 2025-09-09T14:08:27.0216664Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape2_bias_False SKIPPED 2025-09-09T14:08:27.0218312Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape2_bias_True SKIPPED 2025-09-09T14:08:27.0219961Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape3_bias_False SKIPPED 2025-09-09T14:08:27.0221595Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape3_bias_True SKIPPED 2025-09-09T14:08:27.0223237Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape4_bias_False SKIPPED 2025-09-09T14:08:27.0224864Z 
test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_1024_out_features_512_leading_shape4_bias_True SKIPPED 2025-09-09T14:08:27.0226483Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape0_bias_False SKIPPED 2025-09-09T14:08:27.0228121Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape0_bias_True SKIPPED 2025-09-09T14:08:27.0229839Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape1_bias_False SKIPPED 2025-09-09T14:08:27.0231543Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape1_bias_True SKIPPED 2025-09-09T14:08:27.0233300Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape2_bias_False SKIPPED 2025-09-09T14:08:27.0235053Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape2_bias_True SKIPPED 2025-09-09T14:08:27.0236694Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape3_bias_False SKIPPED 2025-09-09T14:08:27.0238540Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape3_bias_True SKIPPED 2025-09-09T14:08:27.0240196Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape4_bias_False SKIPPED 2025-09-09T14:08:27.0241946Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_256_out_features_768_leading_shape4_bias_True SKIPPED 2025-09-09T14:08:27.0243596Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape0_bias_False SKIPPED 2025-09-09T14:08:27.0245339Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape0_bias_True SKIPPED 2025-09-09T14:08:27.0247106Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape1_bias_False SKIPPED 2025-09-09T14:08:27.0248812Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape1_bias_True SKIPPED 2025-09-09T14:08:27.0250515Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape2_bias_False SKIPPED 2025-09-09T14:08:27.0252172Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape2_bias_True SKIPPED 2025-09-09T14:08:27.0253872Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape3_bias_False SKIPPED 
2025-09-09T14:08:27.0255592Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape3_bias_True SKIPPED 2025-09-09T14:08:27.0257374Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape4_bias_False SKIPPED 2025-09-09T14:08:42.3233577Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_mm_float8dq_per_row_in_features_512_out_features_1024_leading_shape4_bias_True SKIPPED 2025-09-09T14:08:42.3235574Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_per_row_with_float32 SKIPPED 2025-09-09T14:08:42.3237199Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_preprocess_scale_3d_reshape PASSED 2025-09-09T14:08:42.3238837Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_serialization_mode_dynamic SKIPPED 2025-09-09T14:08:42.3240469Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_serialization_mode_static SKIPPED 2025-09-09T14:08:42.3242116Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_serialization_mode_weight-only SKIPPED 2025-09-09T14:08:42.3243741Z test/dtypes/test_affine_quantized_float.py::TestAffineQuantizedFloat8Compile::test_unsupported_granularity SKIPPED 2025-09-09T14:08:42.3245370Z test/dtypes/test_affine_quantized_tensor_parallel.py::TestInt8woAffineQuantizedTensorParallel::test_tp_bfloat16 SKIPPED 2025-09-09T14:08:42.3247042Z test/dtypes/test_affine_quantized_tensor_parallel.py::TestInt8woAffineQuantizedTensorParallel::test_tp_float16 SKIPPED 2025-09-09T14:08:42.3248701Z test/dtypes/test_affine_quantized_tensor_parallel.py::TestInt8woAffineQuantizedTensorParallel::test_tp_float32 SKIPPED 2025-09-09T14:08:42.3250359Z test/dtypes/test_affine_quantized_tensor_parallel.py::TestInt4woAffineQuantizedTensorParallel::test_tp_bfloat16 SKIPPED 2025-09-09T14:08:42.3253255Z test/dtypes/test_affine_quantized_tensor_parallel.py::TestGemliteLayoutTensorParallel::test_tp_gemlite_float16 SKIPPED 2025-09-09T14:08:42.3254547Z test/dtypes/test_affine_quantized_tensor_parallel.py::TestInt8dqAffineQuantizedTensorParallel::test_tp_bfloat16 SKIPPED 2025-09-09T14:08:42.3255437Z test/dtypes/test_bitpacking.py::test_CPU[0-1] PASSED 2025-09-09T14:08:42.3256007Z test/dtypes/test_bitpacking.py::test_CPU[0-2] PASSED 2025-09-09T14:08:42.3256551Z test/dtypes/test_bitpacking.py::test_CPU[0-3] PASSED 2025-09-09T14:08:42.3257226Z test/dtypes/test_bitpacking.py::test_CPU[0-4] PASSED 2025-09-09T14:08:42.3257768Z test/dtypes/test_bitpacking.py::test_CPU[0-5] PASSED 2025-09-09T14:08:42.3258328Z test/dtypes/test_bitpacking.py::test_CPU[0-6] PASSED 2025-09-09T14:08:42.3258868Z test/dtypes/test_bitpacking.py::test_CPU[0-7] PASSED 2025-09-09T14:08:42.3259441Z test/dtypes/test_bitpacking.py::test_CPU[-1-1] PASSED 2025-09-09T14:08:42.3260039Z test/dtypes/test_bitpacking.py::test_CPU[-1-2] PASSED 2025-09-09T14:08:42.3260595Z test/dtypes/test_bitpacking.py::test_CPU[-1-3] PASSED 2025-09-09T14:08:42.3261157Z test/dtypes/test_bitpacking.py::test_CPU[-1-4] PASSED 2025-09-09T14:08:42.3261715Z test/dtypes/test_bitpacking.py::test_CPU[-1-5] PASSED 2025-09-09T14:08:42.3262264Z test/dtypes/test_bitpacking.py::test_CPU[-1-6] PASSED 2025-09-09T14:08:42.3262825Z test/dtypes/test_bitpacking.py::test_CPU[-1-7] PASSED 2025-09-09T14:08:42.3263375Z 
test/dtypes/test_bitpacking.py::test_CPU[1-1] PASSED 2025-09-09T14:08:42.3263934Z test/dtypes/test_bitpacking.py::test_CPU[1-2] PASSED 2025-09-09T14:08:42.3264479Z test/dtypes/test_bitpacking.py::test_CPU[1-3] PASSED 2025-09-09T14:08:42.3265036Z test/dtypes/test_bitpacking.py::test_CPU[1-4] PASSED 2025-09-09T14:08:42.3265594Z test/dtypes/test_bitpacking.py::test_CPU[1-5] PASSED 2025-09-09T14:08:42.3266138Z test/dtypes/test_bitpacking.py::test_CPU[1-6] PASSED 2025-09-09T14:08:42.3266695Z test/dtypes/test_bitpacking.py::test_CPU[1-7] PASSED 2025-09-09T14:08:42.3267336Z test/dtypes/test_bitpacking.py::test_GPU[0-1] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3268071Z test/dtypes/test_bitpacking.py::test_GPU[0-2] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3268784Z test/dtypes/test_bitpacking.py::test_GPU[0-3] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3269506Z test/dtypes/test_bitpacking.py::test_GPU[0-4] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3270222Z test/dtypes/test_bitpacking.py::test_GPU[0-5] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3270927Z test/dtypes/test_bitpacking.py::test_GPU[0-6] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3271652Z test/dtypes/test_bitpacking.py::test_GPU[0-7] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3272357Z test/dtypes/test_bitpacking.py::test_GPU[-1-1] SKIPPED (CUDA not ava...) 2025-09-09T14:08:42.3273074Z test/dtypes/test_bitpacking.py::test_GPU[-1-2] SKIPPED (CUDA not ava...) 2025-09-09T14:08:42.3273789Z test/dtypes/test_bitpacking.py::test_GPU[-1-3] SKIPPED (CUDA not ava...) 2025-09-09T14:08:42.3274491Z test/dtypes/test_bitpacking.py::test_GPU[-1-4] SKIPPED (CUDA not ava...) 2025-09-09T14:08:42.3275295Z test/dtypes/test_bitpacking.py::test_GPU[-1-5] SKIPPED (CUDA not ava...) 2025-09-09T14:08:42.3275999Z test/dtypes/test_bitpacking.py::test_GPU[-1-6] SKIPPED (CUDA not ava...) 2025-09-09T14:08:42.3276721Z test/dtypes/test_bitpacking.py::test_GPU[-1-7] SKIPPED (CUDA not ava...) 2025-09-09T14:08:42.3277429Z test/dtypes/test_bitpacking.py::test_GPU[1-1] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3278149Z test/dtypes/test_bitpacking.py::test_GPU[1-2] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3278960Z test/dtypes/test_bitpacking.py::test_GPU[1-3] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3279671Z test/dtypes/test_bitpacking.py::test_GPU[1-4] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3280396Z test/dtypes/test_bitpacking.py::test_GPU[1-5] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3281102Z test/dtypes/test_bitpacking.py::test_GPU[1-6] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3281819Z test/dtypes/test_bitpacking.py::test_GPU[1-7] SKIPPED (CUDA not avai...) 2025-09-09T14:08:42.3282548Z test/dtypes/test_bitpacking.py::test_compile[0-1] SKIPPED (unsupport...) 2025-09-09T14:08:42.3283335Z test/dtypes/test_bitpacking.py::test_compile[0-2] SKIPPED (unsupport...) 2025-09-09T14:08:42.3284058Z test/dtypes/test_bitpacking.py::test_compile[0-3] SKIPPED (unsupport...) 2025-09-09T14:08:42.3284770Z test/dtypes/test_bitpacking.py::test_compile[0-4] SKIPPED (unsupport...) 2025-09-09T14:08:42.3285497Z test/dtypes/test_bitpacking.py::test_compile[0-5] SKIPPED (unsupport...) 2025-09-09T14:08:42.3286215Z test/dtypes/test_bitpacking.py::test_compile[0-6] SKIPPED (unsupport...) 2025-09-09T14:08:42.3286945Z test/dtypes/test_bitpacking.py::test_compile[0-7] SKIPPED (unsupport...) 2025-09-09T14:08:42.3287672Z test/dtypes/test_bitpacking.py::test_compile[-1-1] SKIPPED (unsuppor...) 
2025-09-09T14:08:42.3288390Z test/dtypes/test_bitpacking.py::test_compile[-1-2] SKIPPED (unsuppor...) 2025-09-09T14:08:42.3289121Z test/dtypes/test_bitpacking.py::test_compile[-1-3] SKIPPED (unsuppor...) 2025-09-09T14:08:42.3289836Z test/dtypes/test_bitpacking.py::test_compile[-1-4] SKIPPED (unsuppor...) 2025-09-09T14:08:42.3290561Z test/dtypes/test_bitpacking.py::test_compile[-1-5] SKIPPED (unsuppor...) 2025-09-09T14:08:42.3291289Z test/dtypes/test_bitpacking.py::test_compile[-1-6] SKIPPED (unsuppor...) 2025-09-09T14:08:42.3292000Z test/dtypes/test_bitpacking.py::test_compile[-1-7] SKIPPED (unsuppor...) 2025-09-09T14:08:42.3292735Z test/dtypes/test_bitpacking.py::test_compile[1-1] SKIPPED (unsupport...) 2025-09-09T14:08:42.3293455Z test/dtypes/test_bitpacking.py::test_compile[1-2] SKIPPED (unsupport...) 2025-09-09T14:08:42.3294187Z test/dtypes/test_bitpacking.py::test_compile[1-3] SKIPPED (unsupport...) 2025-09-09T14:08:42.3294898Z test/dtypes/test_bitpacking.py::test_compile[1-4] SKIPPED (unsupport...) 2025-09-09T14:08:42.3295647Z test/dtypes/test_bitpacking.py::test_compile[1-5] SKIPPED (unsupport...) 2025-09-09T14:08:42.3296635Z test/dtypes/test_bitpacking.py::test_compile[1-6] SKIPPED (unsupport...) 2025-09-09T14:08:42.3297589Z test/dtypes/test_bitpacking.py::test_compile[1-7] SKIPPED (unsupport...) 2025-09-09T14:08:42.3298551Z test/dtypes/test_bitpacking.py::test_pack_example SKIPPED (CUDA not ...) 2025-09-09T14:08:42.3299658Z test/dtypes/test_bitpacking.py::test_pack_example_CPU tensor([ 0, 105, 151, 37], dtype=torch.uint8) tensor([ 39, 146], dtype=torch.uint8) 2025-09-09T14:08:42.3300572Z PASSED 2025-09-09T14:08:42.3301626Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_2_mbits_2_bias_False_bfloat16 SKIPPED 2025-09-09T14:08:42.3303331Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_2_mbits_2_bias_False_float16 SKIPPED 2025-09-09T14:08:42.3305040Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_2_mbits_2_bias_True_bfloat16 SKIPPED 2025-09-09T14:08:42.3306735Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_2_mbits_2_bias_True_float16 SKIPPED 2025-09-09T14:08:42.3308451Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_3_mbits_2_bias_False_bfloat16 SKIPPED 2025-09-09T14:08:42.3310490Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_3_mbits_2_bias_False_float16 SKIPPED 2025-09-09T14:08:42.3312199Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_3_mbits_2_bias_True_bfloat16 SKIPPED 2025-09-09T14:08:42.3313907Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_fpx_weight_only_ebits_3_mbits_2_bias_True_float16 SKIPPED 2025-09-09T14:08:42.3315709Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_from_scaled_tc_floatx_compile_ebits_2_mbits_2_device_cpu PASSED 2025-09-09T14:08:42.3317601Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_from_scaled_tc_floatx_compile_ebits_3_mbits_2_device_cpu PASSED 2025-09-09T14:09:26.0678736Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_from_tc_floatx_correctness_ebits_2_mbits_2_device_cpu PASSED 2025-09-09T14:09:26.0680528Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_from_tc_floatx_correctness_ebits_3_mbits_2_device_cpu PASSED 2025-09-09T14:09:26.0682188Z 
test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_pack_tc_fp6_correctness_device_cpu PASSED 2025-09-09T14:09:26.0683687Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_to_copy_device_ebits_2_mbits_2 SKIPPED 2025-09-09T14:09:26.0685184Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_to_copy_device_ebits_3_mbits_2 SKIPPED 2025-09-09T14:09:26.0686913Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_to_scaled_tc_floatx_compile_ebits_2_mbits_2_device_cpu PASSED 2025-09-09T14:09:26.0688674Z test/dtypes/test_floatx.py::TestFloatxTensorCoreAQTTensorImpl::test_to_scaled_tc_floatx_compile_ebits_3_mbits_2_device_cpu PASSED 2025-09-09T14:09:26.0689846Z test/dtypes/test_nf4.py::TestNF4Linear::test_backward_dtype_match_bfloat16 PASSED 2025-09-09T14:09:26.0690647Z test/dtypes/test_nf4.py::TestNF4Linear::test_backward_dtype_match_float16 PASSED 2025-09-09T14:09:26.0691429Z test/dtypes/test_nf4.py::TestNF4Linear::test_backward_dtype_match_float32 PASSED 2025-09-09T14:09:26.0692345Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_bfloat16_shape0_chunk_size_16 SKIPPED 2025-09-09T14:09:26.0693354Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_bfloat16_shape0_chunk_size_32 SKIPPED 2025-09-09T14:09:26.0694369Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_bfloat16_shape0_chunk_size_8 SKIPPED 2025-09-09T14:09:26.0695388Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_bfloat16_shape1_chunk_size_16 SKIPPED 2025-09-09T14:09:26.0696392Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_bfloat16_shape1_chunk_size_32 SKIPPED 2025-09-09T14:09:26.0697404Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_bfloat16_shape1_chunk_size_8 SKIPPED 2025-09-09T14:09:26.0698405Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float16_shape0_chunk_size_16 SKIPPED 2025-09-09T14:09:26.0699411Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float16_shape0_chunk_size_32 SKIPPED 2025-09-09T14:09:26.0700407Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float16_shape0_chunk_size_8 SKIPPED 2025-09-09T14:09:26.0701881Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float16_shape1_chunk_size_16 SKIPPED 2025-09-09T14:09:26.0703269Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float16_shape1_chunk_size_32 SKIPPED 2025-09-09T14:09:26.0704828Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float16_shape1_chunk_size_8 SKIPPED 2025-09-09T14:09:26.0706687Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float32_shape0_chunk_size_16 SKIPPED 2025-09-09T14:09:26.0709258Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float32_shape0_chunk_size_32 SKIPPED 2025-09-09T14:09:26.0711624Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float32_shape0_chunk_size_8 SKIPPED 2025-09-09T14:09:26.0713468Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float32_shape1_chunk_size_16 SKIPPED 2025-09-09T14:09:26.0715362Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float32_shape1_chunk_size_32 SKIPPED 2025-09-09T14:09:26.0717131Z test/dtypes/test_nf4.py::TestNF4Linear::test_chunk_size_equivalence_float32_shape1_chunk_size_8 SKIPPED 2025-09-09T14:09:26.0718893Z test/dtypes/test_nf4.py::TestNF4Linear::test_empty_like_input_size0 SKIPPED 
2025-09-09T14:09:26.0720238Z test/dtypes/test_nf4.py::TestNF4Linear::test_empty_like_input_size1 SKIPPED 2025-09-09T14:09:26.0721575Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_nf4_diff_meta_bfloat16 PASSED 2025-09-09T14:09:26.0722973Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_nf4_diff_meta_float16 PASSED 2025-09-09T14:09:26.0724330Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_nf4_diff_meta_float32 PASSED 2025-09-09T14:09:26.0725713Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_nf4_same_meta_bfloat16 PASSED 2025-09-09T14:09:26.0727094Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_nf4_same_meta_float16 PASSED 2025-09-09T14:09:26.0728508Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_nf4_same_meta_float32 PASSED 2025-09-09T14:09:26.0729918Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_state_dicts_bfloat16 SKIPPED 2025-09-09T14:09:26.0731321Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_state_dicts_float16 SKIPPED 2025-09-09T14:09:26.0732698Z test/dtypes/test_nf4.py::TestNF4Linear::test_load_from_state_dicts_float32 SKIPPED 2025-09-09T14:09:26.0734043Z test/dtypes/test_nf4.py::TestNF4Linear::test_nf4_bnb_linear_bfloat16 SKIPPED 2025-09-09T14:09:26.0735255Z test/dtypes/test_nf4.py::TestNF4Linear::test_nf4_bnb_linear_float16 SKIPPED 2025-09-09T14:09:26.0736645Z test/dtypes/test_nf4.py::TestNF4Linear::test_nf4_bnb_linear_float32 SKIPPED 2025-09-09T14:09:26.0738111Z test/dtypes/test_nf4.py::TestNF4Linear::test_output_dtype_match_bfloat16 PASSED 2025-09-09T14:09:26.0739596Z test/dtypes/test_nf4.py::TestNF4Linear::test_output_dtype_match_float16 PASSED 2025-09-09T14:09:26.0741064Z test/dtypes/test_nf4.py::TestNF4Linear::test_output_dtype_match_float32 PASSED 2025-09-09T14:09:26.0742589Z test/dtypes/test_nf4.py::TestNF4Linear::test_quantize_api_compile_False SKIPPED 2025-09-09T14:09:26.0744075Z test/dtypes/test_nf4.py::TestNF4Linear::test_quantize_api_compile_True SKIPPED 2025-09-09T14:09:26.0745698Z test/dtypes/test_nf4.py::TestNF4Linear::test_reconstruction_qlora_vs_bnb_bfloat16 SKIPPED 2025-09-09T14:09:26.0747410Z test/dtypes/test_nf4.py::TestNF4Linear::test_reconstruction_qlora_vs_bnb_float16 SKIPPED 2025-09-09T14:09:26.0749084Z test/dtypes/test_nf4.py::TestNF4Linear::test_reconstruction_qlora_vs_bnb_float32 SKIPPED 2025-09-09T14:09:26.0750686Z test/dtypes/test_nf4.py::TestNF4Linear::test_register_nf4_as_param_bfloat16 PASSED 2025-09-09T14:09:26.0752147Z test/dtypes/test_nf4.py::TestNF4Linear::test_register_nf4_as_param_float16 PASSED 2025-09-09T14:09:26.0753265Z test/dtypes/test_nf4.py::TestNF4Linear::test_register_nf4_as_param_float32 PASSED 2025-09-09T14:09:26.0754537Z test/dtypes/test_nf4.py::TestNF4Linear::test_smoketest_linear_bfloat16 SKIPPED 2025-09-09T14:09:26.0755734Z test/dtypes/test_nf4.py::TestNF4Linear::test_smoketest_linear_compile_bfloat16 SKIPPED 2025-09-09T14:09:26.0756875Z test/dtypes/test_nf4.py::TestNF4Linear::test_smoketest_linear_compile_float16 SKIPPED 2025-09-09T14:09:26.0759226Z test/dtypes/test_nf4.py::TestNF4Linear::test_smoketest_linear_compile_float32 SKIPPED 2025-09-09T14:09:26.0760344Z test/dtypes/test_nf4.py::TestNF4Linear::test_smoketest_linear_float16 SKIPPED 2025-09-09T14:09:26.0761373Z test/dtypes/test_nf4.py::TestNF4Linear::test_smoketest_linear_float32 SKIPPED 2025-09-09T14:09:26.0762343Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_copy_bfloat16 PASSED 2025-09-09T14:09:26.0763249Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_copy_device SKIPPED 
2025-09-09T14:09:26.0764137Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_copy_float16 PASSED 2025-09-09T14:09:26.0765119Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_copy_float32 PASSED 2025-09-09T14:09:26.0766011Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_dtype_bfloat16 PASSED 2025-09-09T14:09:26.0766922Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_dtype_float16 PASSED 2025-09-09T14:09:26.0767835Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_dtype_float32 PASSED 2025-09-09T14:09:26.0768754Z test/dtypes/test_nf4.py::TestFSDPOps::test_pin_memory SKIPPED (Need ...) 2025-09-09T14:09:26.0769775Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_2d_view_valid_input_size0 PASSED 2025-09-09T14:09:26.0770872Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_as_strided_invalid_input_size0 PASSED 2025-09-09T14:09:26.0772012Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_as_strided_invalid_input_size1 PASSED 2025-09-09T14:09:26.0773120Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_as_strided_valid_input_size1 PASSED 2025-09-09T14:09:26.0774234Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_as_strided_valid_input_size2 PASSED 2025-09-09T14:09:26.0775388Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_as_strided_valid_input_size_262144 PASSED 2025-09-09T14:09:26.0776479Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_deepcopy_input_size1 SKIPPED 2025-09-09T14:09:26.0777516Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_deepcopy_input_size2 SKIPPED 2025-09-09T14:09:26.0778568Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_deepcopy_input_size_262144 SKIPPED 2025-09-09T14:09:26.0779685Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_new_zeros_invalid_input_size1 PASSED 2025-09-09T14:09:26.0780818Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_new_zeros_invalid_input_size2 PASSED 2025-09-09T14:09:26.0781972Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_new_zeros_invalid_input_size_262144 PASSED 2025-09-09T14:09:26.0783114Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_new_zeros_valid_input_size1 PASSED 2025-09-09T14:09:26.0784189Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_new_zeros_valid_input_size2 PASSED 2025-09-09T14:09:26.0785325Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_new_zeros_valid_input_size_262144 PASSED 2025-09-09T14:09:26.0786394Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_slice_1d_invalid PASSED 2025-09-09T14:09:35.0580162Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_slice_2d_invalid PASSED 2025-09-09T14:09:35.0581154Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_slice_valid_input_size1 PASSED 2025-09-09T14:09:35.0582014Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_slice_valid_input_size2 PASSED 2025-09-09T14:09:35.0582838Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_slice_valid_input_size_262144 PASSED 2025-09-09T14:09:35.0583852Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_view_invalid_input_size0 PASSED 2025-09-09T14:09:35.0584753Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_view_valid_input_size0 PASSED 2025-09-09T14:09:35.0585612Z test/dtypes/test_nf4.py::TestFSDPOps::test_tensor_view_valid_input_size1 PASSED 2025-09-09T14:09:35.0586462Z test/dtypes/test_nf4.py::TestFSDPOps::test_to_cpu SKIPPED (Need CUDA...) 2025-09-09T14:09:35.0587510Z test/dtypes/test_nf4.py::TestFSDPOps::test_to_cuda SKIPPED (Need CUD...) 2025-09-09T14:09:35.0588351Z test/dtypes/test_nf4.py::TestFSDPOps::test_to_module SKIPPED (Need C...) 
2025-09-09T14:09:35.0589187Z test/dtypes/test_nf4.py::TestFSDPOps::test_torch_chunk_invalid_3d_input_size0 PASSED 2025-09-09T14:09:35.0590101Z test/dtypes/test_nf4.py::TestFSDPOps::test_torch_chunk_invalid_divide_input_size1 PASSED 2025-09-09T14:09:35.0591159Z test/dtypes/test_nf4.py::TestFSDPOps::test_torch_chunk_invalid_divide_input_size2 PASSED 2025-09-09T14:09:35.0592283Z test/dtypes/test_nf4.py::TestFSDPOps::test_torch_chunk_invalid_divide_input_size_261632 PASSED 2025-09-09T14:09:35.0593147Z test/dtypes/test_nf4.py::TestFSDPOps::test_torch_chunk_valid_input_size1 PASSED 2025-09-09T14:09:35.0594092Z test/dtypes/test_nf4.py::TestFSDPOps::test_torch_chunk_valid_input_size2 PASSED 2025-09-09T14:09:35.0594964Z test/dtypes/test_nf4.py::TestFSDPOps::test_torch_chunk_valid_input_size_262144 PASSED 2025-09-09T14:09:35.0596334Z test/dtypes/test_nf4.py::TestQLoRA::test_qlora_fsdp2 I0909 14:09:27.075322 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 0 with pid 515 2025-09-09T14:09:35.0597588Z I0909 14:09:27.083043 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 1 with pid 516 2025-09-09T14:09:35.0598452Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 2025-09-09T14:09:35.0599247Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 2025-09-09T14:09:35.0599802Z dist init r=0, world=2 2025-09-09T14:09:35.0600040Z dist init r=1, world=2 2025-09-09T14:09:35.0600352Z SKIPPED (Need a...) 2025-09-09T14:09:35.0601262Z test/dtypes/test_nf4.py::TestComm::test_comm I0909 14:09:31.004211 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 0 with pid 555 2025-09-09T14:09:35.0602542Z I0909 14:09:31.012979 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 1 with pid 556 2025-09-09T14:09:35.0603400Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 2025-09-09T14:09:35.0604155Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 2025-09-09T14:09:35.0604675Z dist init r=0, world=2 2025-09-09T14:09:35.0604935Z dist init r=1, world=2 2025-09-09T14:09:35.0605255Z SKIPPED (Need at least ...) 2025-09-09T14:09:35.0605821Z test/dtypes/test_uint4.py::TestUInt4::test_basic_tensor_ops SKIPPED 2025-09-09T14:09:35.0606686Z test/dtypes/test_uint4.py::TestUInt4::test_gpu_quant SKIPPED (FAILED...) 2025-09-09T14:09:35.0607482Z test/dtypes/test_uint4.py::TestUInt4::test_pt2e_quant SKIPPED (FAILE...) 
2025-09-09T14:09:35.0608256Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[32-dtype0] SKIPPED 2025-09-09T14:09:35.0609101Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[32-dtype1] SKIPPED 2025-09-09T14:09:35.0610099Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[32-dtype2] SKIPPED 2025-09-09T14:09:35.0611209Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[32-dtype3] SKIPPED 2025-09-09T14:09:35.0612133Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[32-dtype4] SKIPPED 2025-09-09T14:09:35.0612995Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[32-dtype5] SKIPPED 2025-09-09T14:09:35.0613992Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[32-dtype6] SKIPPED 2025-09-09T14:09:35.0614892Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[64-dtype0] SKIPPED 2025-09-09T14:09:35.0615854Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[64-dtype1] SKIPPED 2025-09-09T14:09:35.0616919Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[64-dtype2] SKIPPED 2025-09-09T14:09:35.0617783Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[64-dtype3] SKIPPED 2025-09-09T14:09:35.0618812Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[64-dtype4] SKIPPED 2025-09-09T14:09:35.0619713Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[64-dtype5] SKIPPED 2025-09-09T14:09:35.0620548Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[64-dtype6] SKIPPED 2025-09-09T14:09:35.0621666Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[128-dtype0] SKIPPED 2025-09-09T14:09:35.0622590Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[128-dtype1] SKIPPED 2025-09-09T14:09:35.0623564Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[128-dtype2] SKIPPED 2025-09-09T14:09:35.0624528Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[128-dtype3] SKIPPED 2025-09-09T14:09:35.0625378Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[128-dtype4] SKIPPED 2025-09-09T14:09:35.0626401Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[128-dtype5] SKIPPED 2025-09-09T14:09:35.0627319Z test/dtypes/test_uintx.py::test_uintx_quant_on_cpu_then_move_to_cuda[128-dtype6] SKIPPED 2025-09-09T14:09:35.0628183Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-32-dtype0] SKIPPED 2025-09-09T14:09:35.0629168Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-32-dtype1] SKIPPED 2025-09-09T14:09:35.0630060Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-32-dtype2] SKIPPED 2025-09-09T14:09:35.0630936Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-32-dtype3] SKIPPED 2025-09-09T14:09:35.0640904Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-32-dtype4] SKIPPED 2025-09-09T14:09:35.0642163Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-32-dtype5] SKIPPED 2025-09-09T14:09:35.0643076Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-32-dtype6] SKIPPED 2025-09-09T14:09:35.0644051Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-64-dtype0] SKIPPED 2025-09-09T14:09:35.0644994Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-64-dtype1] SKIPPED 2025-09-09T14:09:35.0645834Z 
test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-64-dtype2] SKIPPED 2025-09-09T14:09:35.0646859Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-64-dtype3] SKIPPED 2025-09-09T14:09:35.0647747Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-64-dtype4] SKIPPED 2025-09-09T14:09:35.0648570Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-64-dtype5] SKIPPED 2025-09-09T14:09:35.0649587Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-64-dtype6] SKIPPED 2025-09-09T14:09:35.0650501Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-128-dtype0] SKIPPED 2025-09-09T14:09:35.0651370Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-128-dtype1] SKIPPED 2025-09-09T14:09:35.0652348Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-128-dtype2] SKIPPED 2025-09-09T14:09:35.0653259Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-128-dtype3] SKIPPED 2025-09-09T14:09:35.0654220Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-128-dtype4] SKIPPED 2025-09-09T14:09:35.0655128Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-128-dtype5] SKIPPED 2025-09-09T14:09:35.0656027Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cpu-128-dtype6] SKIPPED 2025-09-09T14:09:35.0657149Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-32-dtype0] SKIPPED 2025-09-09T14:09:35.0658079Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-32-dtype1] SKIPPED 2025-09-09T14:09:35.0658903Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-32-dtype2] SKIPPED 2025-09-09T14:09:35.0659930Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-32-dtype3] SKIPPED 2025-09-09T14:09:35.0660832Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-32-dtype4] SKIPPED 2025-09-09T14:09:35.0661796Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-32-dtype5] SKIPPED 2025-09-09T14:09:35.0662752Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-32-dtype6] SKIPPED 2025-09-09T14:09:35.0663647Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-64-dtype0] SKIPPED 2025-09-09T14:09:35.0664615Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-64-dtype1] SKIPPED 2025-09-09T14:09:35.0665547Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-64-dtype2] SKIPPED 2025-09-09T14:09:35.0666410Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-64-dtype3] SKIPPED 2025-09-09T14:09:35.0667245Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-64-dtype4] SKIPPED 2025-09-09T14:09:35.0668061Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-64-dtype5] SKIPPED 2025-09-09T14:09:36.1377037Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-64-dtype6] SKIPPED 2025-09-09T14:09:36.1377961Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-128-dtype0] SKIPPED 2025-09-09T14:09:36.1378817Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-128-dtype1] SKIPPED 2025-09-09T14:09:36.1379650Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-128-dtype2] SKIPPED 2025-09-09T14:09:36.1380522Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-128-dtype3] SKIPPED 2025-09-09T14:09:36.1381357Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-128-dtype4] SKIPPED 
2025-09-09T14:09:36.1382183Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-128-dtype5] SKIPPED 2025-09-09T14:09:36.1383107Z test/dtypes/test_uintx.py::test_uintx_weight_only_model_quant[cuda-128-dtype6] SKIPPED 2025-09-09T14:09:36.1383970Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-32-dtype0] SKIPPED 2025-09-09T14:09:36.1384832Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-32-dtype1] SKIPPED 2025-09-09T14:09:36.1385595Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-32-dtype2] SKIPPED 2025-09-09T14:09:36.1386426Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-32-dtype3] SKIPPED 2025-09-09T14:09:36.1387260Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-32-dtype4] SKIPPED 2025-09-09T14:09:36.1388132Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-32-dtype5] SKIPPED 2025-09-09T14:09:36.1388894Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-32-dtype6] SKIPPED 2025-09-09T14:09:36.1389658Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-64-dtype0] SKIPPED 2025-09-09T14:09:36.1390410Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-64-dtype1] SKIPPED 2025-09-09T14:09:36.1391211Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-64-dtype2] SKIPPED 2025-09-09T14:09:36.1391970Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-64-dtype3] SKIPPED 2025-09-09T14:09:36.1392735Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-64-dtype4] SKIPPED 2025-09-09T14:09:36.1393490Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-64-dtype5] SKIPPED 2025-09-09T14:09:36.1394506Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-64-dtype6] SKIPPED 2025-09-09T14:09:36.1395510Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-128-dtype0] SKIPPED 2025-09-09T14:09:36.1396360Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-128-dtype1] SKIPPED 2025-09-09T14:09:36.1397142Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-128-dtype2] SKIPPED 2025-09-09T14:09:36.1397898Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-128-dtype3] SKIPPED 2025-09-09T14:09:36.1398949Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-128-dtype4] SKIPPED 2025-09-09T14:09:36.1399794Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-128-dtype5] SKIPPED 2025-09-09T14:09:36.1400554Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cpu-128-dtype6] SKIPPED 2025-09-09T14:09:36.1401328Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-32-dtype0] SKIPPED 2025-09-09T14:09:36.1402092Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-32-dtype1] SKIPPED 2025-09-09T14:09:36.1402867Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-32-dtype2] SKIPPED 2025-09-09T14:09:36.1403627Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-32-dtype3] SKIPPED 2025-09-09T14:09:36.1404392Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-32-dtype4] SKIPPED 2025-09-09T14:09:36.1405163Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-32-dtype5] SKIPPED 2025-09-09T14:09:36.1405925Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-32-dtype6] SKIPPED 2025-09-09T14:09:36.1406843Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-64-dtype0] SKIPPED 2025-09-09T14:09:36.1407677Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-64-dtype1] SKIPPED 2025-09-09T14:09:36.1408454Z 
test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-64-dtype2] SKIPPED 2025-09-09T14:09:36.1409220Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-64-dtype3] SKIPPED 2025-09-09T14:09:36.1410194Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-64-dtype4] SKIPPED 2025-09-09T14:09:36.1411087Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-64-dtype5] SKIPPED 2025-09-09T14:09:36.1411914Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-64-dtype6] SKIPPED 2025-09-09T14:09:36.1412693Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-128-dtype0] SKIPPED 2025-09-09T14:09:36.1413487Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-128-dtype1] SKIPPED 2025-09-09T14:09:36.1414254Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-128-dtype2] SKIPPED 2025-09-09T14:09:36.1415037Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-128-dtype3] SKIPPED 2025-09-09T14:09:36.1415804Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-128-dtype4] SKIPPED 2025-09-09T14:09:36.1416587Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-128-dtype5] SKIPPED 2025-09-09T14:09:36.1417357Z test/dtypes/test_uintx.py::test_uintx_weight_only_quant[cuda-128-dtype6] SKIPPED 2025-09-09T14:09:36.1418188Z test/dtypes/test_uintx.py::test_uintx_target_dtype[dtype0] SKIPPED (...) 2025-09-09T14:09:36.1418968Z test/dtypes/test_uintx.py::test_uintx_target_dtype[dtype1] SKIPPED (...) 2025-09-09T14:09:36.1419752Z test/dtypes/test_uintx.py::test_uintx_target_dtype[dtype2] SKIPPED (...) 2025-09-09T14:09:36.1420457Z test/dtypes/test_uintx.py::test_uintx_target_dtype[dtype3] SKIPPED (...) 2025-09-09T14:09:36.1421145Z test/dtypes/test_uintx.py::test_uintx_target_dtype[dtype4] SKIPPED (...) 2025-09-09T14:09:36.1421997Z test/dtypes/test_uintx.py::test_uintx_target_dtype[dtype5] SKIPPED (...) 2025-09-09T14:09:36.1422946Z test/dtypes/test_uintx.py::test_uintx_target_dtype[dtype6] SKIPPED (...) 2025-09-09T14:09:36.1423670Z test/dtypes/test_uintx.py::test_uintx_target_dtype_compile[dtype0] SKIPPED 2025-09-09T14:09:36.1424402Z test/dtypes/test_uintx.py::test_uintx_target_dtype_compile[dtype1] SKIPPED 2025-09-09T14:09:36.1425122Z test/dtypes/test_uintx.py::test_uintx_target_dtype_compile[dtype2] SKIPPED 2025-09-09T14:09:36.1425854Z test/dtypes/test_uintx.py::test_uintx_target_dtype_compile[dtype3] SKIPPED 2025-09-09T14:09:36.1426572Z test/dtypes/test_uintx.py::test_uintx_target_dtype_compile[dtype4] SKIPPED 2025-09-09T14:09:36.1427938Z test/dtypes/test_uintx.py::test_uintx_target_dtype_compile[dtype5] SKIPPED 2025-09-09T14:09:36.1428671Z test/dtypes/test_uintx.py::test_uintx_target_dtype_compile[dtype6] SKIPPED 2025-09-09T14:09:36.1429380Z test/dtypes/test_uintx.py::test_uintx_model_size[dtype0] SKIPPED (Ne...) 2025-09-09T14:09:36.1430247Z test/dtypes/test_uintx.py::test_uintx_model_size[dtype1] SKIPPED (Ne...) 2025-09-09T14:09:36.1431011Z test/dtypes/test_uintx.py::test_uintx_model_size[dtype2] SKIPPED (Ne...) 2025-09-09T14:09:36.1431722Z test/dtypes/test_uintx.py::test_uintx_model_size[dtype3] SKIPPED (Ne...) 2025-09-09T14:09:36.1432427Z test/dtypes/test_uintx.py::test_uintx_model_size[dtype4] SKIPPED (Ne...) 2025-09-09T14:09:36.1433115Z test/dtypes/test_uintx.py::test_uintx_model_size[dtype5] SKIPPED (Ne...) 2025-09-09T14:09:36.1433973Z test/dtypes/test_uintx.py::test_uintx_model_size[dtype6] SKIPPED (Ne...) 
2025-09-09T14:09:36.1435011Z test/float8/test_auto_filter.py::test_end_to_end_filtering[tensorwise-module_dims0-valid.layer-filter_fqns0-True] PASSED 2025-09-09T14:09:36.1436307Z test/float8/test_auto_filter.py::test_end_to_end_filtering[tensorwise-module_dims1-skip_layer.linear-filter_fqns1-False] PASSED 2025-09-09T14:09:36.1437518Z test/float8/test_auto_filter.py::test_end_to_end_filtering[tensorwise-module_dims2-valid.layer-filter_fqns2-False] PASSED 2025-09-09T14:09:36.1438658Z test/float8/test_auto_filter.py::test_end_to_end_filtering[rowwise-module_dims3-valid.layer-filter_fqns3-True] PASSED 2025-09-09T14:09:36.1439829Z test/float8/test_auto_filter.py::test_end_to_end_filtering[rowwise-module_dims4-skip_layer.linear-filter_fqns4-False] PASSED 2025-09-09T14:09:36.1441028Z test/float8/test_auto_filter.py::test_end_to_end_filtering[rowwise-module_dims5-valid.layer-filter_fqns5-False] PASSED 2025-09-09T14:09:36.1442057Z test/float8/test_auto_filter.py::test_exact_boundary_dimensions_rowwise PASSED 2025-09-09T14:09:36.1442926Z test/float8/test_auto_filter.py::test_exact_boundary_dimensions_tensorwise PASSED 2025-09-09T14:09:36.1443648Z test/float8/test_auto_filter.py::test_partial_fqn_matching PASSED 2025-09-09T14:09:36.1444401Z test/float8/test_base.py::TestFloat8TrainingTensor::test_preserves_dtype PASSED 2025-09-09T14:09:36.1445390Z test/float8/test_base.py::TestFloat8TrainingTensor::test_differentiable_casts PASSED 2025-09-09T14:09:36.1446271Z test/float8/test_base.py::TestFloat8TrainingTensor::test_split_cat PASSED 2025-09-09T14:09:36.1447016Z test/float8/test_base.py::TestFloat8TrainingTensor::test_index_put PASSED 2025-09-09T14:09:36.1447731Z test/float8/test_base.py::TestFloat8TrainingTensor::test_copy_ PASSED 2025-09-09T14:09:36.1448456Z test/float8/test_base.py::TestFloat8TrainingTensor::test_transpose PASSED 2025-09-09T14:09:36.1449338Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[True-0-shape0] PASSED 2025-09-09T14:09:36.1450348Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[True-0-shape1] PASSED 2025-09-09T14:09:36.1451344Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[True-0-shape2] PASSED 2025-09-09T14:09:36.1887139Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[True--1-shape0] PASSED 2025-09-09T14:09:36.1888194Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[True--1-shape1] PASSED 2025-09-09T14:09:36.1889198Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[True--1-shape2] PASSED 2025-09-09T14:09:36.1890218Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[False-0-shape0] PASSED 2025-09-09T14:09:36.1891315Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[False-0-shape1] PASSED 2025-09-09T14:09:36.1892572Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[False-0-shape2] PASSED 2025-09-09T14:09:36.1893599Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[False--1-shape0] PASSED 2025-09-09T14:09:36.1894761Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[False--1-shape1] PASSED 2025-09-09T14:09:36.1895867Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_dynamic_cast[False--1-shape2] PASSED 2025-09-09T14:09:36.1896785Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_reshape PASSED 
2025-09-09T14:09:36.1897983Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.AXISWISE-ScalingGranularity.AXISWISE-a_shape0] SKIPPED 2025-09-09T14:09:36.1899535Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.AXISWISE-ScalingGranularity.AXISWISE-a_shape1] SKIPPED 2025-09-09T14:09:36.1901068Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.AXISWISE-ScalingGranularity.AXISWISE-a_shape2] SKIPPED 2025-09-09T14:09:36.1902598Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.AXISWISE-ScalingGranularity.TENSORWISE-a_shape0] SKIPPED 2025-09-09T14:09:36.1904164Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.AXISWISE-ScalingGranularity.TENSORWISE-a_shape1] SKIPPED 2025-09-09T14:09:36.1905916Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.AXISWISE-ScalingGranularity.TENSORWISE-a_shape2] SKIPPED 2025-09-09T14:09:36.1907499Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.TENSORWISE-ScalingGranularity.AXISWISE-a_shape0] SKIPPED 2025-09-09T14:09:36.1909276Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.TENSORWISE-ScalingGranularity.AXISWISE-a_shape1] SKIPPED 2025-09-09T14:09:36.1910977Z test/float8/test_base.py::TestFloat8TrainingTensor::test_axiswise_gemm[ScalingGranularity.TENSORWISE-ScalingGranularity.AXISWISE-a_shape2] SKIPPED 2025-09-09T14:09:36.1912123Z test/float8/test_base.py::TestFloat8TrainingTensor::test_fp8_dtype SKIPPED 2025-09-09T14:09:36.1913602Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-False-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1915712Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-False-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1917767Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-False-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1919760Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-False-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1921789Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-False-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1923914Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-False-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1925959Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-True-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1928021Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-True-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1929920Z 
test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-True-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1931956Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-True-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1933914Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-True-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1936026Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[False-True-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1937926Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-False-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1939876Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-False-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1941906Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-False-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1944011Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-False-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1945891Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-False-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1947785Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-False-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1949892Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-True-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1951891Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-True-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1953849Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-True-linear_dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1955803Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-True-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape0-True] SKIPPED 2025-09-09T14:09:36.1957877Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-True-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape1-True] SKIPPED 2025-09-09T14:09:36.1959910Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_config_params[True-True-linear_dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-x_shape2-True] SKIPPED 2025-09-09T14:09:36.1961514Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-True-x_shape0-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.1963138Z 
test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-True-x_shape0-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.1964625Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-True-x_shape1-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.1966023Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-True-x_shape1-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.1967510Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-True-x_shape2-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.1969070Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-True-x_shape2-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2521139Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-False-x_shape0-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2522618Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-False-x_shape0-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2524052Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-False-x_shape1-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2525471Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-False-x_shape1-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2526961Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-False-x_shape2-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2528383Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype0-False-x_shape2-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2529876Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-True-x_shape0-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2531280Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-True-x_shape0-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2532684Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-True-x_shape1-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2534091Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-True-x_shape1-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2535553Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-True-x_shape2-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2536977Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-True-x_shape2-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2538450Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-False-x_shape0-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2539891Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-False-x_shape0-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2541304Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-False-x_shape1-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2542988Z 
test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-False-x_shape1-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2544474Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-False-x_shape2-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2545912Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype1-False-x_shape2-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2547377Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-True-x_shape0-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2548904Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-True-x_shape0-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2550317Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-True-x_shape1-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2551728Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-True-x_shape1-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2553191Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-True-x_shape2-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2554594Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-True-x_shape2-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2556098Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-False-x_shape0-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2557617Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-False-x_shape0-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2559021Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-False-x_shape1-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2560471Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-False-x_shape1-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2561888Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-False-x_shape2-Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.2563353Z test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-False-x_shape2-Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.2564754Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.TENSORWISE-linear_dtype0-True] SKIPPED 2025-09-09T14:09:36.2566138Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.TENSORWISE-linear_dtype1-True] SKIPPED 2025-09-09T14:09:36.2567439Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.TENSORWISE-linear_dtype2-True] SKIPPED 2025-09-09T14:09:36.2568730Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.ROWWISE-linear_dtype0-True] SKIPPED 2025-09-09T14:09:36.2570001Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.ROWWISE-linear_dtype1-True] SKIPPED 2025-09-09T14:09:36.2571261Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.ROWWISE-linear_dtype2-True] SKIPPED 2025-09-09T14:09:36.2572638Z 
test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.ROWWISE_WITH_GW_HP-linear_dtype0-True] SKIPPED 2025-09-09T14:09:36.2573988Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.ROWWISE_WITH_GW_HP-linear_dtype1-True] SKIPPED 2025-09-09T14:09:36.2575348Z test/float8/test_base.py::TestFloat8Linear::test_autocast_outputs[Float8LinearRecipeName.ROWWISE_WITH_GW_HP-linear_dtype2-True] SKIPPED 2025-09-09T14:09:36.2576402Z test/float8/test_base.py::TestFloat8Linear::test_repr PASSED 2025-09-09T14:09:36.2577149Z test/float8/test_base.py::TestFloat8Linear::test_inference_mode SKIPPED 2025-09-09T14:09:36.2577882Z test/float8/test_base.py::TestFloat8Linear::test_quantize SKIPPED (C...) 2025-09-09T14:09:36.2578699Z test/float8/test_base.py::TestScaledMM::test_scaled_mm_vs_emulated[True-base_dtype0] SKIPPED 2025-09-09T14:09:36.2579597Z test/float8/test_base.py::TestScaledMM::test_scaled_mm_vs_emulated[True-base_dtype1] SKIPPED 2025-09-09T14:09:36.2580579Z test/float8/test_base.py::TestScaledMM::test_scaled_mm_vs_emulated[True-base_dtype2] SKIPPED 2025-09-09T14:09:36.2581503Z test/float8/test_base.py::TestScaledMM::test_scaled_mm_vs_emulated[False-base_dtype0] SKIPPED 2025-09-09T14:09:36.2582430Z test/float8/test_base.py::TestScaledMM::test_scaled_mm_vs_emulated[False-base_dtype1] SKIPPED 2025-09-09T14:09:36.2583321Z test/float8/test_base.py::TestScaledMM::test_scaled_mm_vs_emulated[False-base_dtype2] SKIPPED 2025-09-09T14:09:36.2584148Z test/float8/test_base.py::TestScaledMM::test_different_configs_error SKIPPED 2025-09-09T14:09:36.2585074Z test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[True-base_dtype0] SKIPPED 2025-09-09T14:09:36.2585890Z test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[True-base_dtype1] SKIPPED 2025-09-09T14:09:36.2586716Z test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[True-base_dtype2] SKIPPED 2025-09-09T14:09:36.2587540Z test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[False-base_dtype0] SKIPPED 2025-09-09T14:09:36.2588370Z test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[False-base_dtype1] SKIPPED 2025-09-09T14:09:36.2589247Z test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[False-base_dtype2] SKIPPED 2025-09-09T14:09:36.2590090Z test/float8/test_base.py::TestNumerics::test_small_amax_float16[float8_dtype0] SKIPPED 2025-09-09T14:09:36.2590930Z test/float8/test_base.py::TestNumerics::test_small_amax_float16[float8_dtype1] SKIPPED 2025-09-09T14:09:36.2591792Z test/float8/test_base.py::TestNumerics::test_small_amax_float16[float8_dtype2] SKIPPED 2025-09-09T14:09:36.2592667Z test/float8/test_base.py::TestNumerics::test_small_amax_float16[float8_dtype3] SKIPPED 2025-09-09T14:09:36.2593492Z test/float8/test_base.py::TestFloat8LinearUtils::test_fp8_tensor_statistics PASSED 2025-09-09T14:09:36.2594355Z test/float8/test_base.py::TestFloat8LinearUtils::test_swap_linears_with_filters PASSED 2025-09-09T14:09:36.2595256Z test/float8/test_base.py::TestFloat8LinearUtils::test_swap_root_linear PASSED 2025-09-09T14:09:36.2596151Z test/float8/test_base.py::TestFloat8LinearUtils::test_swap_root_linear_with_children_raises PASSED 2025-09-09T14:09:36.2597064Z test/float8/test_base.py::TestFloat8LinearUtils::test_swap_submodule_linears PASSED 2025-09-09T14:09:36.6015249Z test/float8/test_base.py::TestFloat8LinearUtils::test_swap_submodule_linears_with_skip PASSED 2025-09-09T14:09:36.6016478Z 
test/float8/test_compile.py::test_eager_only[dtype0-True-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-True] SKIPPED 2025-09-09T14:09:36.6018029Z test/float8/test_compile.py::test_eager_only[dtype1-True-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-True] SKIPPED 2025-09-09T14:09:36.6019428Z test/float8/test_compile.py::test_aot_eager[dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-True-True] SKIPPED 2025-09-09T14:09:36.6020955Z test/float8/test_compile.py::test_aot_eager[dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-True-True] SKIPPED 2025-09-09T14:09:36.6022455Z test/float8/test_compile.py::test_inductor_from_config_params[dtype0-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-False-True] SKIPPED 2025-09-09T14:09:36.6024279Z test/float8/test_compile.py::test_inductor_from_config_params[dtype1-ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC-False-True] SKIPPED 2025-09-09T14:09:36.6025697Z test/float8/test_compile.py::test_inductor_from_recipe[Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.6026718Z test/float8/test_compile.py::test_inductor_from_recipe[Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.6027649Z test/float8/test_compile.py::TestGraphBreaks::test_float8_graph_input SKIPPED 2025-09-09T14:09:36.6028606Z test/float8/test_compile.py::TestGraphBreaks::test_float8_graph_output SKIPPED 2025-09-09T14:09:36.6029475Z test/float8/test_compile.py::TestGraphBreaks::test_float8_with_graph_break_in_the_middle SKIPPED 2025-09-09T14:09:36.6030356Z test/float8/test_compile.py::test_dynamic_scale_numeric_parity[True-dtype0] SKIPPED 2025-09-09T14:09:36.6031167Z test/float8/test_compile.py::test_dynamic_scale_numeric_parity[True-dtype1] SKIPPED 2025-09-09T14:09:36.6031990Z test/float8/test_compile.py::test_dynamic_scale_numeric_parity[True-dtype2] SKIPPED 2025-09-09T14:09:36.6032882Z test/float8/test_compile.py::test_dynamic_scale_numeric_parity[False-dtype0] SKIPPED 2025-09-09T14:09:36.6033697Z test/float8/test_compile.py::test_dynamic_scale_numeric_parity[False-dtype1] SKIPPED 2025-09-09T14:09:36.6034521Z test/float8/test_compile.py::test_dynamic_scale_numeric_parity[False-dtype2] SKIPPED 2025-09-09T14:09:36.6035485Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case0] SKIPPED 2025-09-09T14:09:36.6036462Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case1] SKIPPED 2025-09-09T14:09:36.6037501Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case2] SKIPPED 2025-09-09T14:09:36.6038457Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case3] SKIPPED 2025-09-09T14:09:36.6039412Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case4] SKIPPED 2025-09-09T14:09:36.6040354Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case5] SKIPPED 2025-09-09T14:09:36.6041309Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case6] SKIPPED 2025-09-09T14:09:36.6042331Z test/float8/test_float8_utils.py::test_round_scale_down_to_power_of_2_valid_inputs[test_case7] SKIPPED 2025-09-09T14:09:36.6043178Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype0] PASSED 2025-09-09T14:09:36.6043946Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype1] PASSED 2025-09-09T14:09:36.6044803Z 
test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype2] PASSED 2025-09-09T14:09:36.6045563Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype3] PASSED 2025-09-09T14:09:36.6046329Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype4] PASSED 2025-09-09T14:09:36.6047082Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype5] PASSED 2025-09-09T14:09:36.6047844Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype6] PASSED 2025-09-09T14:09:36.6048608Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype7] PASSED 2025-09-09T14:09:36.6050067Z test/float8/test_numerics_integration.py::TestFloat8NumericsIntegrationTest::test_encoder_fw_bw_from_config_params[ScalingType.DYNAMIC-ScalingType.DYNAMIC-ScalingType.DYNAMIC] SKIPPED 2025-09-09T14:09:36.6051830Z test/float8/test_numerics_integration.py::TestFloat8NumericsIntegrationTest::test_encoder_fw_bw_from_recipe[Float8LinearRecipeName.ROWWISE] SKIPPED 2025-09-09T14:09:36.6053519Z test/float8/test_numerics_integration.py::TestFloat8NumericsIntegrationTest::test_encoder_fw_bw_from_recipe[Float8LinearRecipeName.ROWWISE_WITH_GW_HP] SKIPPED 2025-09-09T14:09:36.6054661Z test/hqq/test_hqq_affine.py::TestHQQ::test_hqq_plain_2bit SKIPPED (N...) 2025-09-09T14:09:36.6055359Z test/hqq/test_hqq_affine.py::TestHQQ::test_hqq_plain_3bit SKIPPED (N...) 2025-09-09T14:09:36.6056042Z test/hqq/test_hqq_affine.py::TestHQQ::test_hqq_plain_4bit SKIPPED (N...) 2025-09-09T14:09:36.6056807Z test/hqq/test_hqq_affine.py::TestHQQ::test_hqq_plain_5bit SKIPPED (N...) 2025-09-09T14:09:36.6057558Z test/hqq/test_hqq_affine.py::TestHQQ::test_hqq_plain_6bit SKIPPED (N...) 2025-09-09T14:09:36.6058252Z test/hqq/test_hqq_affine.py::TestHQQ::test_hqq_plain_7bit SKIPPED (N...) 2025-09-09T14:09:36.6059021Z test/hqq/test_hqq_affine.py::TestHQQ::test_hqq_plain_8bit SKIPPED (N...) 2025-09-09T14:09:36.6059819Z test/integration/test_integration.py::SmoothquantUnitTest::test_debug_x_absmax PASSED 2025-09-09T14:09:36.6060680Z test/integration/test_integration.py::SmoothquantUnitTest::test_figure_4 PASSED 2025-09-09T14:09:36.6061575Z test/integration/test_integration.py::SmoothquantUnitTest::test_selective_torch_compile PASSED 2025-09-09T14:09:36.6063122Z test/integration/test_integration.py::SmoothquantUnitTest::test_smooth_linear_cpu [W909 14:09:36.236371648 qlinear_dynamic.cpp:251] Warning: Currently, qnnpack incorrectly ignores reduce_range when it is set to true; this may change in a future release. 
(function operator()) 2025-09-09T14:09:36.6064494Z PASSED 2025-09-09T14:09:36.6065081Z test/integration/test_integration.py::SmoothquantUnitTest::test_smooth_linear_cuda SKIPPED 2025-09-09T14:09:36.6066120Z test/integration/test_integration.py::SmoothquantUnitTest::test_smooth_linear_edge_cases PASSED 2025-09-09T14:09:36.6067008Z test/integration/test_integration.py::SmoothquantUnitTest::test_swap PASSED 2025-09-09T14:09:36.6067823Z test/integration/test_integration.py::SmoothquantUnitTest::test_tensors PASSED 2025-09-09T14:09:36.6068780Z test/integration/test_integration.py::SmoothquantUnitTest::test_weight_t_and_non_t_numerics_match SKIPPED 2025-09-09T14:09:36.6069750Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test__int_mm SKIPPED 2025-09-09T14:09:36.6070837Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test__int_mm_eager_and_torch_compile_numerics SKIPPED 2025-09-09T14:09:36.6072065Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test_dynamic_quant_per_channel_numerics_cpu PASSED 2025-09-09T14:09:36.6073270Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test_dynamic_quant_per_channel_numerics_cuda SKIPPED 2025-09-09T14:09:36.6074457Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test_per_token_linear_cpu PASSED 2025-09-09T14:09:36.6075566Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test_per_token_linear_cuda SKIPPED 2025-09-09T14:09:36.6076614Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test_quantize_per_token_cpu PASSED 2025-09-09T14:09:36.6077670Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test_quantize_per_token_cuda SKIPPED 2025-09-09T14:09:36.6078783Z test/integration/test_integration.py::PythonQuantUtilOpUnitTest::test_quantize_per_token_xpu SKIPPED 2025-09-09T14:09:36.6079927Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_rowwise_scaling_subclass_0_cpu SKIPPED 2025-09-09T14:09:36.6081166Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_rowwise_scaling_subclass_1_cpu SKIPPED 2025-09-09T14:09:36.6082351Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_rowwise_scaling_subclass_2_cpu SKIPPED 2025-09-09T14:09:36.6083612Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_rowwise_scaling_subclass_3_cuda SKIPPED 2025-09-09T14:09:36.6084789Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_rowwise_scaling_subclass_4_cuda SKIPPED 2025-09-09T14:09:36.6086041Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_rowwise_scaling_subclass_5_cuda SKIPPED 2025-09-09T14:09:36.6087238Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_tensorwise_scaling_subclass_0_cpu SKIPPED 2025-09-09T14:09:36.6088582Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_tensorwise_scaling_subclass_1_cpu SKIPPED 2025-09-09T14:09:36.6089793Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_tensorwise_scaling_subclass_2_cpu SKIPPED 2025-09-09T14:09:36.6090995Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_tensorwise_scaling_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2328329Z test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_tensorwise_scaling_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2329588Z 
test/integration/test_integration.py::TestSubclass::test_aq_float8_dynamic_quant_tensorwise_scaling_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2330720Z test/integration/test_integration.py::TestSubclass::test_aq_float8_weight_only_quant_subclass_0_cpu SKIPPED 2025-09-09T14:09:37.2331792Z test/integration/test_integration.py::TestSubclass::test_aq_float8_weight_only_quant_subclass_1_cpu SKIPPED 2025-09-09T14:09:37.2332866Z test/integration/test_integration.py::TestSubclass::test_aq_float8_weight_only_quant_subclass_2_cpu SKIPPED 2025-09-09T14:09:37.2333933Z test/integration/test_integration.py::TestSubclass::test_aq_float8_weight_only_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2335014Z test/integration/test_integration.py::TestSubclass::test_aq_float8_weight_only_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2336069Z test/integration/test_integration.py::TestSubclass::test_aq_float8_weight_only_quant_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2337109Z test/integration/test_integration.py::TestSubclass::test_aq_int8_dynamic_quant_subclass_0_cpu SKIPPED 2025-09-09T14:09:37.2338108Z test/integration/test_integration.py::TestSubclass::test_aq_int8_dynamic_quant_subclass_1_cpu SKIPPED 2025-09-09T14:09:37.2339113Z test/integration/test_integration.py::TestSubclass::test_aq_int8_dynamic_quant_subclass_2_cpu SKIPPED 2025-09-09T14:09:37.2340129Z test/integration/test_integration.py::TestSubclass::test_aq_int8_dynamic_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2341126Z test/integration/test_integration.py::TestSubclass::test_aq_int8_dynamic_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2342146Z test/integration/test_integration.py::TestSubclass::test_aq_int8_dynamic_quant_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2343174Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_2_subclass_0_cpu SKIPPED 2025-09-09T14:09:37.2344232Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_2_subclass_1_cpu SKIPPED 2025-09-09T14:09:37.2345308Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_2_subclass_2_cpu SKIPPED 2025-09-09T14:09:37.2346354Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_2_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2347423Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_2_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2348487Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_2_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2349778Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_3_subclass_0_cpu SKIPPED 2025-09-09T14:09:37.2350844Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_3_subclass_1_cpu SKIPPED 2025-09-09T14:09:37.2351891Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_3_subclass_2_cpu SKIPPED 2025-09-09T14:09:37.2352953Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_3_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2354019Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_3_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2355249Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_3_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2356307Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_subclass_0_cpu SKIPPED 2025-09-09T14:09:37.2357341Z 
test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_subclass_1_cpu SKIPPED 2025-09-09T14:09:37.2358386Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_subclass_2_cpu SKIPPED 2025-09-09T14:09:37.2359437Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2360472Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2361519Z test/integration/test_integration.py::TestSubclass::test_aq_int8_weight_only_quant_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2362543Z test/integration/test_integration.py::TestSubclass::test_autoquantizable_flatten_unflatten PASSED 2025-09-09T14:09:37.2363602Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_0_cpu SKIPPED 2025-09-09T14:09:37.2364730Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_1_cpu SKIPPED 2025-09-09T14:09:37.2365851Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_2_cpu SKIPPED 2025-09-09T14:09:37.2366983Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2368120Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2369237Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2370417Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_grouped_0_cpu SKIPPED 2025-09-09T14:09:37.2371605Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_grouped_1_cpu SKIPPED 2025-09-09T14:09:37.2372811Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_grouped_2_cpu SKIPPED 2025-09-09T14:09:37.2374014Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_grouped_3_cuda SKIPPED 2025-09-09T14:09:37.2375210Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_grouped_4_cuda SKIPPED 2025-09-09T14:09:37.2376422Z test/integration/test_integration.py::TestSubclass::test_dequantize_int4_weight_only_quant_subclass_grouped_5_cuda SKIPPED 2025-09-09T14:09:37.2377555Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_dynamic_quant_subclass_0_cpu PASSED 2025-09-09T14:09:37.2378644Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_dynamic_quant_subclass_1_cpu PASSED 2025-09-09T14:09:37.2379727Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_dynamic_quant_subclass_2_cpu PASSED 2025-09-09T14:09:37.2380884Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_dynamic_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2381989Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_dynamic_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2383088Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_dynamic_quant_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2384192Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_weight_only_quant_subclass_0_cpu PASSED 2025-09-09T14:09:37.2385377Z 
test/integration/test_integration.py::TestSubclass::test_dequantize_int8_weight_only_quant_subclass_1_cpu PASSED 2025-09-09T14:09:37.2386486Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_weight_only_quant_subclass_2_cpu PASSED 2025-09-09T14:09:37.2387621Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_weight_only_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:37.2388765Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_weight_only_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:37.2389891Z test/integration/test_integration.py::TestSubclass::test_dequantize_int8_weight_only_quant_subclass_5_cuda SKIPPED 2025-09-09T14:09:37.2390889Z test/integration/test_integration.py::TestSubclass::test_gemlite_layout_0_cpu SKIPPED 2025-09-09T14:09:37.2391751Z test/integration/test_integration.py::TestSubclass::test_gemlite_layout_1_cpu SKIPPED 2025-09-09T14:09:37.2392622Z test/integration/test_integration.py::TestSubclass::test_gemlite_layout_2_cpu SKIPPED 2025-09-09T14:09:37.2393512Z test/integration/test_integration.py::TestSubclass::test_gemlite_layout_3_cuda SKIPPED 2025-09-09T14:09:37.2394377Z test/integration/test_integration.py::TestSubclass::test_gemlite_layout_4_cuda SKIPPED 2025-09-09T14:09:37.2395339Z test/integration/test_integration.py::TestSubclass::test_gemlite_layout_5_cuda SKIPPED 2025-09-09T14:09:37.2396328Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_hqq_quant_subclass_api_0_cpu SKIPPED 2025-09-09T14:09:37.2397425Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_hqq_quant_subclass_api_1_cpu SKIPPED 2025-09-09T14:09:37.2398573Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_hqq_quant_subclass_api_2_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:37.2399663Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:37.2400286Z return fn(*args, **kwargs) 2025-09-09T14:09:37.2400471Z 2025-09-09T14:09:37.2400634Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:37.2401307Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:37.2401921Z return fn(*args, **kwargs) 2025-09-09T14:09:37.2402104Z 2025-09-09T14:09:42.0914800Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0915707Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0916352Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0916549Z 2025-09-09T14:09:42.0916709Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0917372Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0918004Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0918179Z 2025-09-09T14:09:42.0918352Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0918998Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0919606Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0919786Z 2025-09-09T14:09:42.0920223Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:09:42.0920893Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0921503Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0921680Z 2025-09-09T14:09:42.0921835Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0922498Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0923098Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0923395Z 2025-09-09T14:09:42.0923688Z PASSED 2025-09-09T14:09:42.0924414Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_hqq_quant_subclass_api_3_cuda SKIPPED 2025-09-09T14:09:42.0925516Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_hqq_quant_subclass_api_4_cuda SKIPPED 2025-09-09T14:09:42.0926617Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_hqq_quant_subclass_api_5_cuda SKIPPED 2025-09-09T14:09:42.0927664Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_0_cpu SKIPPED 2025-09-09T14:09:42.0928696Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_1_cpu SKIPPED 2025-09-09T14:09:42.0929718Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_2_cpu SKIPPED 2025-09-09T14:09:42.0930727Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:42.0931761Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:42.0932772Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_5_cuda SKIPPED 2025-09-09T14:09:42.0933823Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_0_cpu SKIPPED 2025-09-09T14:09:42.0934888Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_1_cpu SKIPPED 2025-09-09T14:09:42.0935928Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_2_cpu PASSED 2025-09-09T14:09:42.0936987Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_3_cuda SKIPPED 2025-09-09T14:09:42.0938042Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_4_cuda SKIPPED 2025-09-09T14:09:42.0939114Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_5_cuda SKIPPED 2025-09-09T14:09:42.0940209Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_grouped_0_cpu SKIPPED 2025-09-09T14:09:42.0941332Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_grouped_1_cpu SKIPPED 2025-09-09T14:09:42.0942507Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_grouped_2_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0943607Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0944216Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0944394Z 2025-09-09T14:09:42.0944563Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:09:42.0945219Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0945829Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0946004Z 2025-09-09T14:09:42.0946160Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0946824Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0947509Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0947688Z 2025-09-09T14:09:42.0947841Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0948503Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0949096Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0949285Z 2025-09-09T14:09:42.0949441Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0950087Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0950755Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0950932Z 2025-09-09T14:09:42.0951098Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:09:42.0951738Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:09:42.0952341Z return fn(*args, **kwargs) 2025-09-09T14:09:42.0952519Z 2025-09-09T14:09:42.0952638Z PASSED 2025-09-09T14:09:42.0953351Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_grouped_3_cuda SKIPPED 2025-09-09T14:09:42.0954491Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_grouped_4_cuda SKIPPED 2025-09-09T14:09:42.0955698Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_api_grouped_5_cuda SKIPPED 2025-09-09T14:09:42.0956817Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_grouped_0_cpu SKIPPED 2025-09-09T14:09:42.0957913Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_grouped_1_cpu SKIPPED 2025-09-09T14:09:42.0959013Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_grouped_2_cpu SKIPPED 2025-09-09T14:09:42.0960125Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_grouped_3_cuda SKIPPED 2025-09-09T14:09:42.0961229Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_grouped_4_cuda SKIPPED 2025-09-09T14:09:42.0962337Z test/integration/test_integration.py::TestSubclass::test_int4_weight_only_quant_subclass_grouped_5_cuda SKIPPED 2025-09-09T14:09:42.0963369Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_0_cpu SKIPPED 2025-09-09T14:09:42.0964358Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_1_cpu SKIPPED 2025-09-09T14:09:42.0965355Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_2_cpu SKIPPED 2025-09-09T14:09:42.0966332Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:42.0967326Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:42.0968311Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_5_cuda SKIPPED 
2025-09-09T14:09:42.0969329Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_00_cpu SKIPPED 2025-09-09T14:09:42.0970355Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_01_cpu SKIPPED 2025-09-09T14:09:42.0971371Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_02_cpu SKIPPED 2025-09-09T14:09:42.0972409Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_03_cpu SKIPPED 2025-09-09T14:09:42.0973424Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_04_cpu SKIPPED 2025-09-09T14:09:42.0974445Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_05_cpu SKIPPED 2025-09-09T14:09:42.0975556Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_06_cuda SKIPPED 2025-09-09T14:09:42.0976590Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_07_cuda SKIPPED 2025-09-09T14:09:42.0977623Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_08_cuda SKIPPED 2025-09-09T14:09:42.0978665Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_09_cuda SKIPPED 2025-09-09T14:09:42.0979761Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_10_cuda SKIPPED 2025-09-09T14:09:42.0980798Z test/integration/test_integration.py::TestSubclass::test_int8_dynamic_quant_subclass_api_11_cuda SKIPPED 2025-09-09T14:09:42.0981816Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_0_cpu SKIPPED 2025-09-09T14:09:42.0982845Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_1_cpu SKIPPED 2025-09-09T14:09:42.0983877Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_2_cpu SKIPPED 2025-09-09T14:09:42.0984890Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_3_cuda SKIPPED 2025-09-09T14:09:42.0985915Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_4_cuda SKIPPED 2025-09-09T14:09:42.0986928Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_5_cuda SKIPPED 2025-09-09T14:10:08.3977411Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_api_0_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3978543Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3979156Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3979391Z 2025-09-09T14:10:08.3979553Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3980204Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3980816Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3981033Z 2025-09-09T14:10:08.3981189Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3981853Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3982462Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3982637Z 2025-09-09T14:10:08.3982806Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:10:08.3983446Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3984052Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3984229Z 2025-09-09T14:10:08.3984388Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3985045Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3985657Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3985832Z 2025-09-09T14:10:08.3985986Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3986644Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3987241Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3987434Z 2025-09-09T14:10:08.3987703Z PASSED 2025-09-09T14:10:08.3988409Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_api_1_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3989487Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3990097Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3990277Z 2025-09-09T14:10:08.3990729Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3991394Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3991988Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3992179Z 2025-09-09T14:10:08.3992334Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3992997Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3993701Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3993890Z 2025-09-09T14:10:08.3994042Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3994688Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3995374Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3995551Z 2025-09-09T14:10:08.3995722Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3996370Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3996975Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3997152Z 2025-09-09T14:10:08.3997307Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.3997966Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.3998561Z return fn(*args, **kwargs) 2025-09-09T14:10:08.3998752Z 2025-09-09T14:10:08.3998897Z PASSED 2025-09-09T14:10:08.3999608Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_api_2_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.4000678Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.4001288Z return fn(*args, **kwargs) 2025-09-09T14:10:08.4001464Z 2025-09-09T14:10:08.4001622Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:10:08.4002277Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.4002882Z return fn(*args, **kwargs) 2025-09-09T14:10:08.4003057Z 2025-09-09T14:10:08.4003210Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.4003864Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.4004461Z return fn(*args, **kwargs) 2025-09-09T14:10:08.4004654Z 2025-09-09T14:10:08.4004809Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:08.4005454Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 68, in inner 2025-09-09T14:10:08.4006058Z return fn(*args, **kwargs) 2025-09-09T14:10:08.4006232Z 2025-09-09T14:10:08.4006360Z PASSED 2025-09-09T14:10:08.4007032Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_api_3_cuda SKIPPED 2025-09-09T14:10:08.4008109Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_api_4_cuda SKIPPED 2025-09-09T14:10:08.4009166Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_subclass_api_5_cuda SKIPPED 2025-09-09T14:10:08.4010396Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_with_freeze_0_cpu Autotune Choices Stats: 2025-09-09T14:10:08.4011353Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmFP32Vec_0", "best_time": 0.00381700004936647} 2025-09-09T14:10:08.4011962Z AUTOTUNE packed_linear(32x64, 1982689x1, 32x64) 2025-09-09T14:10:08.4012310Z strides: [64, 1], [1, 0], [64, 1] 2025-09-09T14:10:08.4012636Z dtypes: torch.float32, torch.float32, torch.float32 2025-09-09T14:10:08.4013024Z cpp_CppMicroGemmFP32Vec_0 0.0038 ms 100.0% 2025-09-09T14:10:08.4013342Z _mkl_linear 0.0192 ms 19.9% 2025-09-09T14:10:08.4014084Z SingleProcess AUTOTUNE benchmarking takes 0.2508 seconds and 2.4858 seconds precompiling for 2 choices 2025-09-09T14:10:08.4014670Z Autotune Choices Stats: 2025-09-09T14:10:08.4015200Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmFP32Vec_1", "best_time": 0.0034475000063594052} 2025-09-09T14:10:08.4015826Z AUTOTUNE packed_linear(32x32, 1982689x1, 32x32) 2025-09-09T14:10:08.4016152Z strides: [32, 1], [1, 0], [32, 1] 2025-09-09T14:10:08.4016485Z dtypes: torch.float32, torch.float32, torch.float32 2025-09-09T14:10:08.4016944Z cpp_CppMicroGemmFP32Vec_1 0.0034 ms 100.0% 2025-09-09T14:10:08.4017274Z _mkl_linear 0.0185 ms 18.7% 2025-09-09T14:10:08.4017818Z SingleProcess AUTOTUNE benchmarking takes 0.2506 seconds and 2.4128 seconds precompiling for 2 choices 2025-09-09T14:10:08.4018423Z PASSED 2025-09-09T14:10:08.4019011Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_with_freeze_1_cpu Autotune Choices Stats: 2025-09-09T14:10:08.4019933Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmFP32Vec_2", "best_time": 0.004072000024279987} 2025-09-09T14:10:08.4020518Z AUTOTUNE mm(32x64, 64x32) 2025-09-09T14:10:08.4020768Z strides: [64, 1], [1, 64] 2025-09-09T14:10:08.4021039Z dtypes: torch.float16, torch.float16 2025-09-09T14:10:08.4021375Z cpp_CppMicroGemmFP32Vec_2 0.0041 ms 100.0% 2025-09-09T14:10:08.4021683Z mm 0.0293 ms 13.9% 2025-09-09T14:10:08.4022190Z SingleProcess AUTOTUNE benchmarking takes 0.2548 seconds and 2.5615 seconds precompiling for 2 choices 2025-09-09T14:10:08.4022757Z 
Autotune Choices Stats: 2025-09-09T14:10:08.4023289Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmFP32Vec_3", "best_time": 0.0036620000400944264} 2025-09-09T14:10:08.4023893Z AUTOTUNE mm(32x32, 32x32) 2025-09-09T14:10:08.4024137Z strides: [32, 1], [1, 32] 2025-09-09T14:10:08.4024413Z dtypes: torch.float16, torch.float16 2025-09-09T14:10:08.4024742Z cpp_CppMicroGemmFP32Vec_3 0.0037 ms 100.0% 2025-09-09T14:10:08.4025066Z mm 0.0251 ms 14.6% 2025-09-09T14:10:08.4025564Z SingleProcess AUTOTUNE benchmarking takes 0.2552 seconds and 2.5527 seconds precompiling for 2 choices 2025-09-09T14:10:08.4026163Z PASSED 2025-09-09T14:10:08.4026750Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_with_freeze_2_cpu Autotune Choices Stats: 2025-09-09T14:10:08.4027678Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmFP32Vec_4", "best_time": 0.003961000004437665} 2025-09-09T14:10:08.4028304Z AUTOTUNE _weight_int8pack_mm(32x64, 32x64, 32) 2025-09-09T14:10:08.4028634Z strides: [64, 1], [64, 1], [1] 2025-09-09T14:10:08.4028959Z dtypes: torch.bfloat16, torch.int8, torch.bfloat16 2025-09-09T14:10:08.4029337Z cpp_CppMicroGemmFP32Vec_4 0.0040 ms 100.0% 2025-09-09T14:10:08.4029683Z _weight_int8pack_mm 0.0178 ms 22.3% 2025-09-09T14:10:08.4030261Z SingleProcess AUTOTUNE benchmarking takes 0.2507 seconds and 2.5381 seconds precompiling for 2 choices 2025-09-09T14:10:08.4030833Z Autotune Choices Stats: 2025-09-09T14:10:08.4031366Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmFP32Vec_5", "best_time": 0.003689999971356883} 2025-09-09T14:10:08.4031968Z AUTOTUNE _weight_int8pack_mm(32x32, 32x32, 32) 2025-09-09T14:10:08.4032302Z strides: [32, 1], [32, 1], [1] 2025-09-09T14:10:08.4032611Z dtypes: torch.bfloat16, torch.int8, torch.bfloat16 2025-09-09T14:10:08.4032992Z cpp_CppMicroGemmFP32Vec_5 0.0037 ms 100.0% 2025-09-09T14:10:08.4033319Z _weight_int8pack_mm 0.0174 ms 21.2% 2025-09-09T14:10:08.4034083Z SingleProcess AUTOTUNE benchmarking takes 0.2509 seconds and 2.5578 seconds precompiling for 2 choices 2025-09-09T14:10:08.4034698Z PASSED 2025-09-09T14:10:08.4035427Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_with_freeze_3_cuda SKIPPED 2025-09-09T14:10:08.4036593Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_with_freeze_4_cuda SKIPPED 2025-09-09T14:10:08.4037648Z test/integration/test_integration.py::TestSubclass::test_int8_weight_only_quant_with_freeze_5_cuda SKIPPED 2025-09-09T14:10:08.4038592Z test/integration/test_integration.py::TestDynamicQuant::test_dynamic_quant PASSED 2025-09-09T14:10:08.4039615Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_groupwise_embedding_quant PASSED 2025-09-09T14:10:08.4040708Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_groupwise_quant PASSED 2025-09-09T14:10:08.4041790Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant PASSED 2025-09-09T14:10:08.4042852Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_force_mixed_mm_0_cpu SKIPPED 2025-09-09T14:10:08.4044019Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_force_mixed_mm_1_cpu SKIPPED 2025-09-09T14:10:08.4045174Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_force_mixed_mm_2_cpu SKIPPED 2025-09-09T14:10:08.4046330Z 
test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_force_mixed_mm_3_cuda SKIPPED 2025-09-09T14:10:14.5727822Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_force_mixed_mm_4_cuda SKIPPED 2025-09-09T14:10:14.5729061Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_force_mixed_mm_5_cuda SKIPPED 2025-09-09T14:10:14.5730276Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_use_mixed_mm_0_cpu SKIPPED 2025-09-09T14:10:14.5731539Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_use_mixed_mm_1_cpu SKIPPED 2025-09-09T14:10:14.5732683Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_use_mixed_mm_2_cpu SKIPPED 2025-09-09T14:10:14.5733896Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_use_mixed_mm_3_cuda SKIPPED 2025-09-09T14:10:14.5735044Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_use_mixed_mm_4_cuda SKIPPED 2025-09-09T14:10:14.5736182Z test/integration/test_integration.py::TestWeightOnlyInt8Quant::test_weight_only_quant_use_mixed_mm_5_cuda SKIPPED 2025-09-09T14:10:14.5737262Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_dqtensors_0_cpu SKIPPED 2025-09-09T14:10:14.5738240Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_dqtensors_1_cpu SKIPPED 2025-09-09T14:10:14.5739256Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_dqtensors_2_cpu SKIPPED 2025-09-09T14:10:14.5740228Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_dqtensors_3_cuda SKIPPED 2025-09-09T14:10:14.5741216Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_dqtensors_4_cuda SKIPPED 2025-09-09T14:10:14.5742240Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_dqtensors_5_cuda SKIPPED 2025-09-09T14:10:14.5743221Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int4woqtensors_0_cpu SKIPPED 2025-09-09T14:10:14.5744229Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int4woqtensors_1_cpu SKIPPED 2025-09-09T14:10:14.5745303Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int4woqtensors_2_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5746252Z File "/pytorch/ao/test/integration/test_integration.py", line 1248, in forward 2025-09-09T14:10:14.5746777Z x = self.lin1(x) 2025-09-09T14:10:14.5746924Z 2025-09-09T14:10:14.5747084Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5747891Z File "/pytorch/ao/test/integration/test_integration.py", line 1249, in forward 2025-09-09T14:10:14.5748367Z x = self.relu(x) 2025-09-09T14:10:14.5748511Z 2025-09-09T14:10:14.5748669Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:10:14.5749268Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5749725Z x = self.lin2(x) 2025-09-09T14:10:14.5749881Z 2025-09-09T14:10:14.5750017Z PASSED 2025-09-09T14:10:14.5750650Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int4woqtensors_3_cuda SKIPPED 2025-09-09T14:10:14.5751795Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int4woqtensors_4_cuda SKIPPED 2025-09-09T14:10:14.5752819Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int4woqtensors_5_cuda SKIPPED 2025-09-09T14:10:14.5753888Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int8woqtensors_0_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5754892Z File "/pytorch/ao/test/integration/test_integration.py", line 1248, in forward 2025-09-09T14:10:14.5755442Z x = self.lin1(x) 2025-09-09T14:10:14.5755600Z 2025-09-09T14:10:14.5755758Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5756290Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5756748Z x = self.lin2(x) 2025-09-09T14:10:14.5756890Z 2025-09-09T14:10:14.5757105Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5757639Z File "/pytorch/ao/test/integration/test_integration.py", line 1248, in forward 2025-09-09T14:10:14.5758108Z x = self.lin1(x) 2025-09-09T14:10:14.5758253Z 2025-09-09T14:10:14.5758410Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5758939Z File "/pytorch/ao/test/integration/test_integration.py", line 1249, in forward 2025-09-09T14:10:14.5759404Z x = self.relu(x) 2025-09-09T14:10:14.5759550Z 2025-09-09T14:10:14.5759709Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5760239Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5760694Z x = self.lin2(x) 2025-09-09T14:10:14.5760852Z 2025-09-09T14:10:14.5761007Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5761524Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5761989Z x = self.lin2(x) 2025-09-09T14:10:14.5762194Z 2025-09-09T14:10:14.5762345Z PASSED 2025-09-09T14:10:14.5763035Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int8woqtensors_1_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5763972Z File "/pytorch/ao/test/integration/test_integration.py", line 1248, in forward 2025-09-09T14:10:14.5764426Z x = self.lin1(x) 2025-09-09T14:10:14.5764579Z 2025-09-09T14:10:14.5764738Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5765330Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5765781Z x = self.lin2(x) 2025-09-09T14:10:14.5765923Z 2025-09-09T14:10:14.5766088Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5766598Z File "/pytorch/ao/test/integration/test_integration.py", line 1248, in forward 2025-09-09T14:10:14.5767061Z x = self.lin1(x) 2025-09-09T14:10:14.5767207Z 2025-09-09T14:10:14.5767369Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:10:14.5767902Z File "/pytorch/ao/test/integration/test_integration.py", line 1249, in forward 2025-09-09T14:10:14.5768370Z x = self.relu(x) 2025-09-09T14:10:14.5768513Z 2025-09-09T14:10:14.5768668Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5769197Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5769765Z x = self.lin2(x) 2025-09-09T14:10:14.5769923Z 2025-09-09T14:10:14.5770138Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5770657Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5771132Z x = self.lin2(x) 2025-09-09T14:10:14.5771273Z 2025-09-09T14:10:14.5771413Z PASSED 2025-09-09T14:10:14.5772098Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int8woqtensors_2_cpu cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5773170Z File "/pytorch/ao/test/integration/test_integration.py", line 1248, in forward 2025-09-09T14:10:14.5773632Z x = self.lin1(x) 2025-09-09T14:10:14.5773788Z 2025-09-09T14:10:14.5773941Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5774458Z File "/pytorch/ao/test/integration/test_integration.py", line 1249, in forward 2025-09-09T14:10:14.5774927Z x = self.relu(x) 2025-09-09T14:10:14.5775068Z 2025-09-09T14:10:14.5775243Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5775762Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5776230Z x = self.lin2(x) 2025-09-09T14:10:14.5776373Z 2025-09-09T14:10:14.5776527Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:10:14.5777066Z File "/pytorch/ao/test/integration/test_integration.py", line 1250, in forward 2025-09-09T14:10:14.5777521Z x = self.lin2(x) 2025-09-09T14:10:14.5777675Z 2025-09-09T14:10:14.5777802Z PASSED 2025-09-09T14:10:14.5778511Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int8woqtensors_3_cuda SKIPPED 2025-09-09T14:10:14.5779518Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int8woqtensors_4_cuda SKIPPED 2025-09-09T14:10:14.5780533Z test/integration/test_integration.py::TestSaveLoadMeta::test_save_load_int8woqtensors_5_cuda SKIPPED 2025-09-09T14:10:14.5781529Z test/integration/test_integration.py::TorchCompileUnitTest::test_fullgraph SKIPPED 2025-09-09T14:10:14.5782364Z test/integration/test_integration.py::UtilsUnitTest::test_shape_logger PASSED 2025-09-09T14:10:14.5783369Z test/integration/test_integration.py::SmoothquantIntegrationTest::test_non_dynamically_quantizable_linear SKIPPED 2025-09-09T14:10:14.5784280Z test/integration/test_integration.py::SmoothquantIntegrationTest::test_on_dummy_distilbert 2025-09-09T14:10:14.5784877Z tokenizer_config.json: 0% 0.00/48.0 [00:00) 2025-09-09T14:14:06.9290552Z converted model pt2e: GraphModule( 2025-09-09T14:14:06.9290844Z (conv): Module() 2025-09-09T14:14:06.9291063Z (bn): Module() 2025-09-09T14:14:06.9291279Z ) 2025-09-09T14:14:06.9291379Z 2025-09-09T14:14:06.9291383Z 2025-09-09T14:14:06.9291388Z 2025-09-09T14:14:06.9291479Z def forward(self, x): 2025-09-09T14:14:06.9291793Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:06.9292158Z conv_bias = self.conv.bias 2025-09-09T14:14:06.9292491Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:06.9293301Z quantize_per_tensor_default = 
torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:14:06.9294726Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:06.9295929Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:06.9296454Z _scale_0 = self._scale_0 2025-09-09T14:14:06.9296745Z _zero_point_0 = self._zero_point_0 2025-09-09T14:14:06.9297083Z quantize_per_channel = self._frozen_param0 2025-09-09T14:14:06.9298099Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:14:06.9299669Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:14:06.9301056Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 0.010256201960146427, -10, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:14:06.9302639Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010256201960146427, -10, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:14:06.9312758Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:14:06.9313258Z 2025-09-09T14:14:06.9313563Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:06.9313988Z converted model fx: GraphModule( 2025-09-09T14:14:06.9314412Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:14:06.9314905Z ) 2025-09-09T14:14:06.9315009Z 2025-09-09T14:14:06.9315226Z 2025-09-09T14:14:06.9315230Z 2025-09-09T14:14:06.9315320Z def forward(self, x): 2025-09-09T14:14:06.9316016Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:14:06.9317470Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:06.9318640Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:14:06.9319615Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.010256201960146427, -10, -128, 127, torch.int8); conv = None 2025-09-09T14:14:06.9321104Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010256201960146427, -10, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:14:06.9322140Z return dequantize_per_tensor_default_1 2025-09-09T14:14:06.9322434Z 2025-09-09T14:14:06.9322743Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:06.9323145Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:14:16.9966747Z [0., 0., 0.], 2025-09-09T14:14:16.9967135Z [0., 0., 0.]]]) 2025-09-09T14:14:16.9967487Z model pt2e: GraphModule( 2025-09-09T14:14:16.9967890Z (conv): Module()
2025-09-09T14:14:16.9968173Z (bn): Module() 2025-09-09T14:14:16.9968608Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:16.9970053Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:16.9971515Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:14:16.9972124Z ) 2025-09-09T14:14:16.9972423Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:16.9973521Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:14:16.9974814Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.3201889097690582, max_val=0.3243715763092041) 2025-09-09T14:14:16.9975401Z ) 2025-09-09T14:14:16.9975713Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:16.9976777Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0103]), zero_point=tensor([-10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:16.9978044Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.20903742313385, max_val=1.4068148136138916) 2025-09-09T14:14:16.9978627Z ) 2025-09-09T14:14:16.9978824Z ) 2025-09-09T14:14:16.9978929Z 2025-09-09T14:14:16.9978934Z 2025-09-09T14:14:16.9978937Z 2025-09-09T14:14:16.9979044Z def forward(self, x): 2025-09-09T14:14:16.9979351Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:16.9979739Z conv_weight = self.conv.weight 2025-09-09T14:14:16.9980314Z conv_bias = self.conv.bias 2025-09-09T14:14:16.9980608Z bn_weight = self.bn.weight 2025-09-09T14:14:16.9980881Z bn_bias = self.bn.bias 2025-09-09T14:14:16.9981172Z bn_running_mean = self.bn.running_mean 2025-09-09T14:14:16.9981518Z bn_running_var = self.bn.running_var 2025-09-09T14:14:16.9981882Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:16.9982387Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:16.9983050Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:16.9983759Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:14:16.9984191Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:14:16.9984662Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:14:16.9985151Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:14:16.9985731Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:14:16.9986377Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:14:16.9987071Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:14:16.9988198Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:14:16.9989205Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 
2025-09-09T14:14:16.9989814Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:14:16.9990468Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1]); conv_bias = None 2025-09-09T14:14:16.9991085Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:14:16.9992090Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:14:16.9993167Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:14:16.9993839Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:14:16.9994284Z 2025-09-09T14:14:16.9994707Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:16.9995136Z model fx: GraphModule( 2025-09-09T14:14:16.9995486Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:16.9996576Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:16.9997865Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:14:16.9998447Z ) 2025-09-09T14:14:16.9998653Z (conv): ConvBn1d( 2025-09-09T14:14:16.9998894Z 3, 3, kernel_size=(3,), stride=(1,) 2025-09-09T14:14:16.9999358Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:14:16.9999879Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:17.0000943Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:14:17.0002255Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.3201889097690582, max_val=0.3243715763092041) 2025-09-09T14:14:17.0002847Z ) 2025-09-09T14:14:17.0003046Z ) 2025-09-09T14:14:17.0003345Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:17.0004527Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0103]), zero_point=tensor([-10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:17.0005807Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.20903742313385, max_val=1.4068148136138916) 2025-09-09T14:14:17.0006381Z ) 2025-09-09T14:14:17.0006578Z ) 2025-09-09T14:14:17.0006682Z 2025-09-09T14:14:17.0006686Z 2025-09-09T14:14:17.0006765Z 2025-09-09T14:14:17.0006858Z def forward(self, x): 2025-09-09T14:14:17.0007258Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:17.0007850Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:14:17.0008478Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:14:17.0008970Z return activation_post_process_1 2025-09-09T14:14:17.0009261Z 2025-09-09T14:14:17.0009576Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:17.0010549Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:14:17.0010886Z [0., 0., 
0.], 2025-09-09T14:14:17.0011143Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:14:17.0011489Z converted model pt2e: GraphModule( 2025-09-09T14:14:17.0011777Z (conv): Module() 2025-09-09T14:14:17.0012013Z (bn): Module() 2025-09-09T14:14:17.0012238Z ) 2025-09-09T14:14:17.0012343Z 2025-09-09T14:14:17.0012359Z 2025-09-09T14:14:17.0012363Z 2025-09-09T14:14:17.0012455Z def forward(self, x): 2025-09-09T14:14:17.0012781Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:17.0013149Z conv_bias = self.conv.bias 2025-09-09T14:14:17.0013490Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:17.0014303Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:14:17.0015754Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:17.0016967Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:17.0017520Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:14:17.0018437Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.002554106991738081, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:14:17.0019912Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:14:17.0021294Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 0.010258244350552559, -10, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:14:17.0022810Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010258244350552559, -10, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:14:17.0023979Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:14:17.0024440Z 2025-09-09T14:14:17.0024755Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:17.0025174Z onverted model fx: GraphModule( 2025-09-09T14:14:17.0025596Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:14:17.0026021Z ) 2025-09-09T14:14:17.0026127Z 2025-09-09T14:14:17.0026131Z 2025-09-09T14:14:17.0026135Z 2025-09-09T14:14:17.0026227Z def forward(self, x): 2025-09-09T14:14:17.0027192Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:14:17.0028626Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:17.0029807Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:14:17.0030808Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.010258244350552559, -10, -128, 127, torch.int8); conv = None 2025-09-09T14:14:17.0032394Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010258244350552559, -10, 
-128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:14:17.0033429Z return dequantize_per_tensor_default_1 2025-09-09T14:14:17.0033745Z 2025-09-09T14:14:17.0034048Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:17.0034524Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:14:17.0034865Z [0., 0., 0.], 2025-09-09T14:14:17.0035103Z [0., 0., 0.]]]) 2025-09-09T14:14:17.0035556Z PASSED 2025-09-09T14:14:30.1070586Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_fusion_cuda SKIPPED 2025-09-09T14:14:30.1072147Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_fusion_literal_args model pt2e: GraphModule( 2025-09-09T14:14:30.1073174Z (conv): Module() 2025-09-09T14:14:30.1073505Z (bn): Module() 2025-09-09T14:14:30.1073937Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:30.1075428Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0161]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:30.1077212Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.276310682296753, max_val=1.8198994398117065) 2025-09-09T14:14:30.1077994Z ) 2025-09-09T14:14:30.1078375Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:30.1079891Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026, 0.0026, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:14:30.1081882Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3263, -0.3276, -0.3045]), max_val=tensor([0.2760, 0.3011, 0.3298])) 2025-09-09T14:14:30.1082881Z ) 2025-09-09T14:14:30.1083272Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:30.1084703Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0152]), zero_point=tensor([-12], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:30.1086398Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.7719206809997559, max_val=2.111994981765747) 2025-09-09T14:14:30.1087167Z ) 2025-09-09T14:14:30.1087414Z ) 2025-09-09T14:14:30.1087547Z 2025-09-09T14:14:30.1087552Z 2025-09-09T14:14:30.1087556Z 2025-09-09T14:14:30.1087692Z def forward(self, x): 2025-09-09T14:14:30.1088084Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:30.1088583Z conv_weight = self.conv.weight 2025-09-09T14:14:30.1088962Z conv_bias = self.conv.bias 2025-09-09T14:14:30.1089328Z bn_weight = self.bn.weight 2025-09-09T14:14:30.1089673Z bn_bias = self.bn.bias 2025-09-09T14:14:30.1090043Z bn_running_mean = self.bn.running_mean 2025-09-09T14:14:30.1090469Z bn_running_var = self.bn.running_var 2025-09-09T14:14:30.1091258Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:30.1091916Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:30.1092784Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:30.1093575Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:14:30.1094124Z sqrt = torch.ops.aten.sqrt.default(add); 
add = None 2025-09-09T14:14:30.1094723Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:14:30.1095456Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:14:30.1096195Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:14:30.1097035Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:14:30.1097943Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:14:30.1099451Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, zeros_like, [2], [4]); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:14:30.1100799Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:14:30.1101585Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:14:30.1102436Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1]); conv_bias = None 2025-09-09T14:14:30.1103246Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:14:30.1104563Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:14:30.1105968Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:14:30.1106653Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:14:30.1107095Z 2025-09-09T14:14:30.1107394Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:30.1107801Z model fx: GraphModule( 2025-09-09T14:14:30.1108142Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:30.1109224Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0161]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:30.1110676Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.276310682296753, max_val=1.8198994398117065) 2025-09-09T14:14:30.1111265Z ) 2025-09-09T14:14:30.1111468Z (conv): ConvBn1d( 2025-09-09T14:14:30.1111749Z 3, 3, kernel_size=(3,), stride=(2,), padding=(4,) 2025-09-09T14:14:30.1112233Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:14:30.1112763Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:30.1113862Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026, 0.0026, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:14:30.1115417Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3263, -0.3276, -0.3045]), max_val=tensor([0.2760, 0.3011, 0.3298])) 2025-09-09T14:14:30.1116179Z ) 2025-09-09T14:14:30.1116365Z ) 2025-09-09T14:14:30.1116675Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:30.1117749Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0152]), zero_point=tensor([-12], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 
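# A rough check of how the affine activation observers above map their running min/max to the
# printed scale / zero_point for int8 with quant_min=-128, quant_max=127. This approximates the
# observer arithmetic (values copied from the activation_post_process_0 printout), it is not a
# call into the real observer implementation:
min_val, max_val = -2.276310682296753, 1.8198994398117065
qmin, qmax = -128, 127
scale = (max_val - min_val) / (qmax - qmin)     # ~0.016064, i.e. the printed 0.0161
zero_point = qmin - round(min_val / scale)      # -128 - round(-141.7) = 14
# Symmetric weight observers instead use scale = max(|min_val|, |max_val|) / 127 and zero_point 0.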
2025-09-09T14:14:30.1119167Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.7719206809997559, max_val=2.111994981765747) 2025-09-09T14:14:30.1119746Z ) 2025-09-09T14:14:30.1119939Z ) 2025-09-09T14:14:30.1120043Z 2025-09-09T14:14:30.1120047Z 2025-09-09T14:14:30.1120051Z 2025-09-09T14:14:30.1120157Z def forward(self, x): 2025-09-09T14:14:30.1120540Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:30.1121144Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:14:30.1121839Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:14:30.1122401Z return activation_post_process_1 2025-09-09T14:14:30.1122678Z 2025-09-09T14:14:30.1122988Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:30.1123408Z diff: tensor([[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:30.1123695Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:30.1124016Z [0., 0., 0., 0., 0., 0.]]], grad_fn=) 2025-09-09T14:14:30.1124356Z converted model pt2e: GraphModule( 2025-09-09T14:14:30.1124647Z (conv): Module() 2025-09-09T14:14:30.1124861Z (bn): Module() 2025-09-09T14:14:30.1125071Z ) 2025-09-09T14:14:30.1125172Z 2025-09-09T14:14:30.1125176Z 2025-09-09T14:14:30.1125180Z 2025-09-09T14:14:30.1125280Z def forward(self, x): 2025-09-09T14:14:30.1125579Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:30.1125949Z conv_bias = self.conv.bias 2025-09-09T14:14:30.1126272Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:30.1127082Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.016063569113612175, 14, -128, 127, torch.int8); x = None 2025-09-09T14:14:30.1128525Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.016063569113612175, 14, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:30.1129736Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:30.1130269Z _scale_0 = self._scale_0 2025-09-09T14:14:30.1130651Z _zero_point_0 = self._zero_point_0 2025-09-09T14:14:30.1130992Z quantize_per_channel = self._frozen_param0 2025-09-09T14:14:30.1132000Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:14:30.1133585Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias, [2], [4]); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:14:30.1134992Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 0.015231042169034481, -12, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:14:30.1136488Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.015231042169034481, -12, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:14:30.1137655Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:14:30.1138125Z 2025-09-09T14:14:30.1138422Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:30.1138844Z onverted model fx: GraphModule( 2025-09-09T14:14:30.1139300Z 
(conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(2,), padding=(4,)) 2025-09-09T14:14:30.1139767Z ) 2025-09-09T14:14:30.1139874Z 2025-09-09T14:14:30.1139878Z 2025-09-09T14:14:30.1139882Z 2025-09-09T14:14:30.1139972Z def forward(self, x): 2025-09-09T14:14:30.1140779Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.016063569113612175, 14, -128, 127, torch.int8); x = None 2025-09-09T14:14:30.1142237Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.016063569113612175, 14, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:30.1143403Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:14:40.0139491Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.015231042169034481, -12, -128, 127, torch.int8); conv = None 2025-09-09T14:14:40.0141921Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.015231042169034481, -12, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:14:40.0143294Z return dequantize_per_tensor_default_1 2025-09-09T14:14:40.0143703Z 2025-09-09T14:14:40.0144110Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:40.0144656Z diff: tensor([[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:40.0145032Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:40.0145386Z [0., 0., 0., 0., 0., 0.]]]) 2025-09-09T14:14:40.0145748Z model pt2e: GraphModule( 2025-09-09T14:14:40.0146080Z (conv): Module() 2025-09-09T14:14:40.0146355Z (bn): Module() 2025-09-09T14:14:40.0146777Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:40.0148198Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0161]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:40.0149998Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.276310682296753, max_val=1.8198994398117065) 2025-09-09T14:14:40.0150784Z ) 2025-09-09T14:14:40.0151164Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:40.0152622Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:14:40.0154345Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.32764676213264465, max_val=0.3298276662826538) 2025-09-09T14:14:40.0155189Z ) 2025-09-09T14:14:40.0155578Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:40.0156990Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0152]), zero_point=tensor([-12], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:40.0158690Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.7719206809997559, max_val=2.113234519958496) 2025-09-09T14:14:40.0159463Z ) 2025-09-09T14:14:40.0159694Z ) 2025-09-09T14:14:40.0159839Z 2025-09-09T14:14:40.0159845Z 2025-09-09T14:14:40.0159856Z 2025-09-09T14:14:40.0159972Z def forward(self, x): 2025-09-09T14:14:40.0160361Z x, = fx_pytree.tree_flatten_spec(([x], {}), 
self._in_spec) 2025-09-09T14:14:40.0160856Z conv_weight = self.conv.weight 2025-09-09T14:14:40.0161247Z conv_bias = self.conv.bias 2025-09-09T14:14:40.0161598Z bn_weight = self.bn.weight 2025-09-09T14:14:40.0161956Z bn_bias = self.bn.bias 2025-09-09T14:14:40.0162306Z bn_running_mean = self.bn.running_mean 2025-09-09T14:14:40.0162739Z bn_running_var = self.bn.running_var 2025-09-09T14:14:40.0163208Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:40.0163854Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:40.0164722Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:40.0165507Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:14:40.0166226Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:14:40.0166813Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:14:40.0167450Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:14:40.0168172Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:14:40.0169010Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:14:40.0169919Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:14:40.0171519Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, zeros_like, [2], [4]); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:14:40.0172877Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:14:40.0173662Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:14:40.0174513Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1]); conv_bias = None 2025-09-09T14:14:40.0175295Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:14:40.0176289Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:14:40.0177353Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:14:40.0178021Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:14:40.0178461Z 2025-09-09T14:14:40.0178759Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:40.0179175Z model fx: GraphModule( 2025-09-09T14:14:40.0179532Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:40.0180602Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0161]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:40.0181875Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.276310682296753, max_val=1.8198994398117065) 2025-09-09T14:14:40.0182439Z ) 2025-09-09T14:14:40.0182637Z (conv): ConvBn1d( 2025-09-09T14:14:40.0182900Z 3, 3, kernel_size=(3,), stride=(2,), padding=(4,) 2025-09-09T14:14:40.0183391Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:14:40.0183914Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:40.0184965Z 
fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:14:40.0186261Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.32764676213264465, max_val=0.3298276662826538) 2025-09-09T14:14:40.0186842Z ) 2025-09-09T14:14:40.0187033Z ) 2025-09-09T14:14:40.0187337Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:40.0188397Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0152]), zero_point=tensor([-12], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:40.0189676Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.7719206809997559, max_val=2.113234519958496) 2025-09-09T14:14:40.0190246Z ) 2025-09-09T14:14:40.0190432Z ) 2025-09-09T14:14:40.0190532Z 2025-09-09T14:14:40.0190537Z 2025-09-09T14:14:40.0190541Z 2025-09-09T14:14:40.0190647Z def forward(self, x): 2025-09-09T14:14:40.0191021Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:40.0191701Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:14:40.0192308Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:14:40.0192793Z return activation_post_process_1 2025-09-09T14:14:40.0193067Z 2025-09-09T14:14:40.0193374Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:40.0193790Z diff: tensor([[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:40.0194076Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:40.0194455Z [0., 0., 0., 0., 0., 0.]]], grad_fn=) 2025-09-09T14:14:40.0194862Z converted model pt2e: GraphModule( 2025-09-09T14:14:40.0195155Z (conv): Module() 2025-09-09T14:14:40.0195366Z (bn): Module() 2025-09-09T14:14:40.0195586Z ) 2025-09-09T14:14:40.0195692Z 2025-09-09T14:14:40.0195696Z 2025-09-09T14:14:40.0195699Z 2025-09-09T14:14:40.0195789Z def forward(self, x): 2025-09-09T14:14:40.0196104Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:40.0196476Z conv_bias = self.conv.bias 2025-09-09T14:14:40.0196797Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:40.0197614Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.016063569113612175, 14, -128, 127, torch.int8); x = None 2025-09-09T14:14:40.0199051Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.016063569113612175, 14, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:40.0200261Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:40.0200813Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:14:40.0201716Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.002597068203613162, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:14:40.0203182Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias, [2], [4]); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:14:40.0204561Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 
0.0152359027415514, -12, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:14:40.0206057Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.0152359027415514, -12, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:14:40.0207228Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:14:40.0207682Z 2025-09-09T14:14:40.0207988Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:40.0208399Z onverted model fx: GraphModule( 2025-09-09T14:14:40.0208866Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(2,), padding=(4,)) 2025-09-09T14:14:40.0209339Z ) 2025-09-09T14:14:40.0209442Z 2025-09-09T14:14:40.0209446Z 2025-09-09T14:14:40.0209450Z 2025-09-09T14:14:40.0209540Z def forward(self, x): 2025-09-09T14:14:40.0210413Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.016063569113612175, 14, -128, 127, torch.int8); x = None 2025-09-09T14:14:40.0211846Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.016063569113612175, 14, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:54.2771407Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:14:54.2772770Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0152359027415514, -12, -128, 127, torch.int8); conv = None 2025-09-09T14:14:54.2774498Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0152359027415514, -12, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:14:54.2775519Z return dequantize_per_tensor_default_1 2025-09-09T14:14:54.2775869Z 2025-09-09T14:14:54.2776173Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:54.2776589Z diff: tensor([[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:54.2776891Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:14:54.2777260Z [0., 0., 0., 0., 0., 0.]]]) 2025-09-09T14:14:54.2777740Z PASSED 2025-09-09T14:14:54.2778425Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_fusion_no_conv_bias model pt2e: GraphModule( 2025-09-09T14:14:54.2779172Z (conv): Module() 2025-09-09T14:14:54.2779383Z (bn): Module() 2025-09-09T14:14:54.2779716Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:54.2780800Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:54.2782142Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, max_val=3.2356624603271484) 2025-09-09T14:14:54.2782737Z ) 2025-09-09T14:14:54.2783041Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:54.2784174Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0022, 0.0026, 0.0023]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:14:54.2785673Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.2639, -0.2941, 
-0.2608]), max_val=tensor([0.2795, 0.3227, 0.2891])) 2025-09-09T14:14:54.2786406Z ) 2025-09-09T14:14:54.2786732Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:54.2787798Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0169]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:54.2789074Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.9445278644561768, max_val=2.3592891693115234) 2025-09-09T14:14:54.2789673Z ) 2025-09-09T14:14:54.2789854Z ) 2025-09-09T14:14:54.2789960Z 2025-09-09T14:14:54.2789977Z 2025-09-09T14:14:54.2789981Z 2025-09-09T14:14:54.2790076Z def forward(self, x): 2025-09-09T14:14:54.2790383Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:54.2790771Z conv_weight = self.conv.weight 2025-09-09T14:14:54.2791083Z bn_weight = self.bn.weight 2025-09-09T14:14:54.2791356Z bn_bias = self.bn.bias 2025-09-09T14:14:54.2791648Z bn_running_mean = self.bn.running_mean 2025-09-09T14:14:54.2791975Z bn_running_var = self.bn.running_var 2025-09-09T14:14:54.2792350Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:54.2792837Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:54.2793507Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:54.2794093Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:14:54.2794536Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:14:54.2795089Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:14:54.2795567Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:14:54.2796132Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:14:54.2796760Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:14:54.2797827Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:14:54.2798768Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:14:54.2799356Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:14:54.2800372Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:14:54.2801531Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:14:54.2802210Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:14:54.2802651Z 2025-09-09T14:14:54.2802947Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:54.2803359Z model fx: GraphModule( 2025-09-09T14:14:54.2803699Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:54.2804778Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:54.2806051Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, 
max_val=3.2356624603271484) 2025-09-09T14:14:54.2806645Z ) 2025-09-09T14:14:54.2806845Z (conv): ConvBn1d( 2025-09-09T14:14:54.2807097Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:14:54.2807588Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:14:54.2808099Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:54.2809210Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0022, 0.0026, 0.0023]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:14:54.2810936Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.2639, -0.2941, -0.2608]), max_val=tensor([0.2795, 0.3227, 0.2891])) 2025-09-09T14:14:54.2811674Z ) 2025-09-09T14:14:54.2811869Z ) 2025-09-09T14:14:54.2812158Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:14:54.2813247Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0169]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:14:54.2814525Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.9445278644561768, max_val=2.3592891693115234) 2025-09-09T14:14:54.2815095Z ) 2025-09-09T14:14:54.2815285Z ) 2025-09-09T14:14:54.2815389Z 2025-09-09T14:14:54.2815394Z 2025-09-09T14:14:54.2815397Z 2025-09-09T14:14:54.2815488Z def forward(self, x): 2025-09-09T14:14:54.2815880Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:14:54.2816468Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:14:54.2817088Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:14:54.2817575Z return activation_post_process_1 2025-09-09T14:14:54.2817861Z 2025-09-09T14:14:54.2818170Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:54.2818569Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:14:54.2818830Z [0., 0., 0.], 2025-09-09T14:14:54.2819048Z [0., 0., 0.]], 2025-09-09T14:14:54.2819204Z 2025-09-09T14:14:54.2819282Z [[0., 0., 0.], 2025-09-09T14:14:54.2819495Z [0., 0., 0.], 2025-09-09T14:14:54.2819722Z [0., 0., 0.]], 2025-09-09T14:14:54.2819864Z 2025-09-09T14:14:54.2820097Z [[0., 0., 0.], 2025-09-09T14:14:54.2820318Z [0., 0., 0.], 2025-09-09T14:14:54.2820576Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:14:54.2820904Z converted model pt2e: GraphModule( 2025-09-09T14:14:54.2821200Z (conv): Module() 2025-09-09T14:14:54.2821409Z (bn): Module() 2025-09-09T14:14:54.2821622Z ) 2025-09-09T14:14:54.2821722Z 2025-09-09T14:14:54.2821726Z 2025-09-09T14:14:54.2821730Z 2025-09-09T14:14:54.2821819Z def forward(self, x): 2025-09-09T14:14:54.2822220Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:14:54.2822639Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:14:54.2823445Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:14:54.2824889Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:14:54.2826082Z add_ = 
torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:14:54.2826607Z _scale_0 = self._scale_0 2025-09-09T14:14:54.2826894Z _zero_point_0 = self._zero_point_0 2025-09-09T14:14:54.2827214Z quantize_per_channel = self._frozen_param0 2025-09-09T14:14:54.2828241Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:14:54.2829401Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:14:54.2830627Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_weight_bias = None 2025-09-09T14:14:54.2832092Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 0.016877712681889534, -13, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:14:54.2833596Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.016877712681889534, -13, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:14:54.2834838Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:14:54.2835309Z 2025-09-09T14:14:54.2835619Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:14:54.2836042Z onverted model fx: GraphModule( 2025-09-09T14:14:54.2836452Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:14:54.2836874Z ) 2025-09-09T14:14:54.2836976Z 2025-09-09T14:14:54.2836981Z 2025-09-09T14:14:54.2836985Z 2025-09-09T14:14:54.2837076Z def forward(self, x): 2025-09-09T14:14:54.2837793Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:15:04.2010976Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:04.2012206Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:15:04.2013228Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.016877712681889534, -13, -128, 127, torch.int8); conv = None 2025-09-09T14:15:04.2014710Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.016877712681889534, -13, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:15:04.2015741Z return dequantize_per_tensor_default_1 2025-09-09T14:15:04.2016340Z 2025-09-09T14:15:04.2016667Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:04.2017089Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:15:04.2017335Z [0., 0., 0.], 2025-09-09T14:15:04.2017567Z [0., 0., 0.]], 2025-09-09T14:15:04.2017712Z 2025-09-09T14:15:04.2017790Z [[0., 0., 0.], 2025-09-09T14:15:04.2018021Z [0., 0., 0.], 2025-09-09T14:15:04.2018232Z [0., 0., 0.]], 2025-09-09T14:15:04.2018389Z 2025-09-09T14:15:04.2018577Z [[0., 0., 0.], 2025-09-09T14:15:04.2018791Z [0., 0., 0.], 2025-09-09T14:15:04.2019017Z [0., 0., 0.]]]) 2025-09-09T14:15:04.2019257Z model pt2e: GraphModule( 2025-09-09T14:15:04.2019511Z (conv): Module() 
2025-09-09T14:15:04.2019736Z (bn): Module() 2025-09-09T14:15:04.2020056Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:04.2021149Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:04.2022434Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, max_val=3.2356624603271484) 2025-09-09T14:15:04.2023034Z ) 2025-09-09T14:15:04.2023331Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:04.2024420Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:15:04.2025717Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.2940981984138489, max_val=0.32268622517585754) 2025-09-09T14:15:04.2026294Z ) 2025-09-09T14:15:04.2026603Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:04.2027668Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0169]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:04.2028943Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.9461743831634521, max_val=2.3577661514282227) 2025-09-09T14:15:04.2029535Z ) 2025-09-09T14:15:04.2029714Z ) 2025-09-09T14:15:04.2029826Z 2025-09-09T14:15:04.2029831Z 2025-09-09T14:15:04.2029834Z 2025-09-09T14:15:04.2029925Z def forward(self, x): 2025-09-09T14:15:04.2030235Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:04.2030620Z conv_weight = self.conv.weight 2025-09-09T14:15:04.2030927Z bn_weight = self.bn.weight 2025-09-09T14:15:04.2031195Z bn_bias = self.bn.bias 2025-09-09T14:15:04.2031481Z bn_running_mean = self.bn.running_mean 2025-09-09T14:15:04.2031804Z bn_running_var = self.bn.running_var 2025-09-09T14:15:04.2032177Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:15:04.2032659Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:04.2033325Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:15:04.2033907Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:15:04.2034341Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:15:04.2034904Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:15:04.2035388Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:15:04.2035958Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:15:04.2036632Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:15:04.2037713Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:15:04.2038662Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:15:04.2039248Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:15:04.2040262Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, 
bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:15:04.2041329Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:15:04.2042063Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:15:04.2042504Z 2025-09-09T14:15:04.2042797Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:04.2043206Z model fx: GraphModule( 2025-09-09T14:15:04.2043560Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:04.2044632Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:04.2045916Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, max_val=3.2356624603271484) 2025-09-09T14:15:04.2046488Z ) 2025-09-09T14:15:04.2046685Z (conv): ConvBn1d( 2025-09-09T14:15:04.2046937Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:15:04.2047426Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:15:04.2047949Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:04.2048992Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:15:04.2050287Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.2940981984138489, max_val=0.32268622517585754) 2025-09-09T14:15:04.2050880Z ) 2025-09-09T14:15:04.2051071Z ) 2025-09-09T14:15:04.2051377Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:04.2052444Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0169]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:04.2053726Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.9461743831634521, max_val=2.3577661514282227) 2025-09-09T14:15:04.2054307Z ) 2025-09-09T14:15:04.2054495Z ) 2025-09-09T14:15:04.2054596Z 2025-09-09T14:15:04.2054600Z 2025-09-09T14:15:04.2054605Z 2025-09-09T14:15:04.2054706Z def forward(self, x): 2025-09-09T14:15:04.2055083Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:04.2055686Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:15:04.2056294Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:15:04.2056775Z return activation_post_process_1 2025-09-09T14:15:04.2057053Z 2025-09-09T14:15:04.2057358Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:04.2057763Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:15:04.2058026Z [0., 0., 0.], 2025-09-09T14:15:04.2058263Z [0., 0., 0.]], 2025-09-09T14:15:04.2058408Z 2025-09-09T14:15:04.2058488Z [[0., 0., 0.], 2025-09-09T14:15:04.2058717Z [0., 0., 0.], 2025-09-09T14:15:04.2058932Z [0., 0., 0.]], 2025-09-09T14:15:04.2059087Z 2025-09-09T14:15:04.2059168Z [[0., 0., 0.], 2025-09-09T14:15:04.2059382Z [0., 0., 0.], 2025-09-09T14:15:04.2059637Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:15:04.2060043Z converted 
model pt2e: GraphModule( 2025-09-09T14:15:04.2060346Z (conv): Module() 2025-09-09T14:15:04.2060576Z (bn): Module() 2025-09-09T14:15:04.2060782Z ) 2025-09-09T14:15:04.2060884Z 2025-09-09T14:15:04.2060888Z 2025-09-09T14:15:04.2060893Z 2025-09-09T14:15:04.2061002Z def forward(self, x): 2025-09-09T14:15:04.2061301Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:04.2061719Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:15:04.2062522Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:15:04.2064032Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:04.2065236Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:15:04.2065786Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:15:04.2066725Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0025408363435417414, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:15:04.2067641Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:15:04.2068603Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_weight_bias = None 2025-09-09T14:15:04.2070038Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 0.016878198832273483, -13, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:15:04.2071565Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.016878198832273483, -13, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:15:04.2072741Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:15:04.2073198Z 2025-09-09T14:15:04.2073517Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:04.2073932Z onverted model fx: GraphModule( 2025-09-09T14:15:04.2074359Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:15:04.2074861Z ) 2025-09-09T14:15:04.2074970Z 2025-09-09T14:15:04.2074974Z 2025-09-09T14:15:04.2074982Z 2025-09-09T14:15:04.2075074Z def forward(self, x): 2025-09-09T14:15:04.2075791Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:15:05.8175746Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:05.8176980Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:15:05.8177981Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.016878198832273483, -13, -128, 127, torch.int8); conv = None 2025-09-09T14:15:05.8179504Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.016878198832273483, -13, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 
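# The quantize_per_tensor / dequantize_per_tensor pairs above are the decomposed "reference"
# representation produced by convert: values stay in float but are snapped to the int8 grid.
# A sketch of the op semantics (hypothetical helpers, not the real decomposed kernels):
import torch

def quantize_per_tensor(x, scale, zero_point, qmin, qmax):
    return torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.int8)

def dequantize_per_tensor(q, scale, zero_point):
    return (q.to(torch.float32) - zero_point) * scale
# dequantize_per_channel works the same way but applies one (scale, zero_point) pair per
# output channel along the given axis, matching the per-channel weight observers.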
2025-09-09T14:15:05.8180537Z return dequantize_per_tensor_default_1 2025-09-09T14:15:05.8180853Z 2025-09-09T14:15:05.8181160Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:05.8181581Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:15:05.8181836Z [0., 0., 0.], 2025-09-09T14:15:05.8182078Z [0., 0., 0.]], 2025-09-09T14:15:05.8182225Z 2025-09-09T14:15:05.8182308Z [[0., 0., 0.], 2025-09-09T14:15:05.8182886Z [0., 0., 0.], 2025-09-09T14:15:05.8183126Z [0., 0., 0.]], 2025-09-09T14:15:05.8183273Z 2025-09-09T14:15:05.8183356Z [[0., 0., 0.], 2025-09-09T14:15:05.8183591Z [0., 0., 0.], 2025-09-09T14:15:05.8183813Z [0., 0., 0.]]]) 2025-09-09T14:15:05.8184074Z model pt2e: GraphModule( 2025-09-09T14:15:05.8184323Z (conv1): Module() 2025-09-09T14:15:05.8184552Z (bn1): Module() 2025-09-09T14:15:05.8184770Z (conv2): Module() 2025-09-09T14:15:05.8184994Z (bn2): Module() 2025-09-09T14:15:05.8185427Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:05.8186520Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:05.8187826Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, max_val=3.2356624603271484) 2025-09-09T14:15:05.8188407Z ) 2025-09-09T14:15:05.8188716Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:05.8189848Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0025, 0.0020, 0.0022]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:15:05.8191343Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3119, -0.2563, -0.2799]), max_val=tensor([0.3101, 0.1970, 0.1855])) 2025-09-09T14:15:05.8192100Z ) 2025-09-09T14:15:05.8192400Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:05.8193547Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026, 0.0026, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:15:05.8195148Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3263, -0.3276, -0.3045]), max_val=tensor([0.1376, 0.2760, 0.3298])) 2025-09-09T14:15:05.8195892Z ) 2025-09-09T14:15:05.8196205Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:05.8197277Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0132]), zero_point=tensor([-3], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:05.8198566Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.6533392667770386, max_val=1.7188055515289307) 2025-09-09T14:15:05.8199162Z ) 2025-09-09T14:15:05.8199461Z (activation_post_process_4): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:05.8200544Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0110]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:05.8201805Z (activation_post_process): 
MovingAverageMinMaxObserver(min_val=-1.403315544128418, max_val=1.3918161392211914) 2025-09-09T14:15:05.8202399Z ) 2025-09-09T14:15:05.8202586Z ) 2025-09-09T14:15:05.8202704Z 2025-09-09T14:15:05.8202710Z 2025-09-09T14:15:05.8202713Z 2025-09-09T14:15:05.8202805Z def forward(self, x): 2025-09-09T14:15:05.8203124Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:05.8203502Z conv1_weight = self.conv1.weight 2025-09-09T14:15:05.8203823Z bn1_weight = self.bn1.weight 2025-09-09T14:15:05.8204104Z bn1_bias = self.bn1.bias 2025-09-09T14:15:05.8204392Z conv2_weight = self.conv2.weight 2025-09-09T14:15:05.8204693Z conv2_bias = self.conv2.bias 2025-09-09T14:15:05.8204990Z bn2_weight = self.bn2.weight 2025-09-09T14:15:05.8205283Z bn2_bias = self.bn2.bias 2025-09-09T14:15:05.8205664Z bn1_running_mean = self.bn1.running_mean 2025-09-09T14:15:05.8206018Z bn1_running_var = self.bn1.running_var 2025-09-09T14:15:05.8206390Z bn1_num_batches_tracked = self.bn1.num_batches_tracked 2025-09-09T14:15:05.8206791Z bn2_running_mean = self.bn2.running_mean 2025-09-09T14:15:05.8207126Z bn2_running_var = self.bn2.running_var 2025-09-09T14:15:05.8207513Z bn2_num_batches_tracked = self.bn2.num_batches_tracked 2025-09-09T14:15:05.8208008Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:05.8208746Z add_ = torch.ops.aten.add_.Tensor(bn1_num_batches_tracked, 1); bn1_num_batches_tracked = add_ = None 2025-09-09T14:15:05.8209511Z add__1 = torch.ops.aten.add_.Tensor(bn2_num_batches_tracked, 1); bn2_num_batches_tracked = add__1 = None 2025-09-09T14:15:05.8210316Z add = torch.ops.aten.add.Tensor(bn2_running_var, 1e-05) 2025-09-09T14:15:05.8210769Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:15:05.8211227Z div = torch.ops.aten.div.Tensor(bn2_weight, sqrt); sqrt = None 2025-09-09T14:15:05.8211731Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:15:05.8212294Z mul = torch.ops.aten.mul.Tensor(conv2_weight, reshape); conv2_weight = reshape = None 2025-09-09T14:15:05.8212945Z activation_post_process_3 = self.activation_post_process_3(mul); mul = None 2025-09-09T14:15:05.8213651Z zeros_like = torch.ops.aten.zeros_like.default(conv2_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:15:05.8214269Z add_2 = torch.ops.aten.add.Tensor(bn1_running_var, 1e-05) 2025-09-09T14:15:05.8214734Z sqrt_1 = torch.ops.aten.sqrt.default(add_2); add_2 = None 2025-09-09T14:15:05.8215219Z div_2 = torch.ops.aten.div.Tensor(bn1_weight, sqrt_1); sqrt_1 = None 2025-09-09T14:15:05.8215740Z reshape_3 = torch.ops.aten.reshape.default(div_2, [-1, 1, 1]) 2025-09-09T14:15:05.8216338Z mul_1 = torch.ops.aten.mul.Tensor(conv1_weight, reshape_3); conv1_weight = reshape_3 = None 2025-09-09T14:15:05.8217003Z activation_post_process_1 = self.activation_post_process_1(mul_1); mul_1 = None 2025-09-09T14:15:05.8217980Z conv1d_3 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:15:05.8218924Z reshape_4 = torch.ops.aten.reshape.default(div_2, [1, -1, 1]); div_2 = None 2025-09-09T14:15:05.8219542Z div_3 = torch.ops.aten.div.Tensor(conv1d_3, reshape_4); conv1d_3 = reshape_4 = None 2025-09-09T14:15:05.8220610Z batch_norm_3 = torch.ops.aten.batch_norm.default(div_3, bn1_weight, bn1_bias, bn1_running_mean, bn1_running_var, True, 0.1, 1e-05, True); div_3 = bn1_weight = bn1_bias = bn1_running_mean = bn1_running_var = None 
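# The per-channel symmetric weight fake-quants above derive one scale per output channel as
# max(|min|, |max|) / 127 with zero_point fixed at 0. A rough check against the printed
# MovingAveragePerChannelMinMaxObserver state (an approximation of the observer math, not its
# real implementation):
import torch
min_vals = torch.tensor([-0.3263, -0.3276, -0.3045])
max_vals = torch.tensor([0.1376, 0.2760, 0.3298])
scales = torch.maximum(min_vals.abs(), max_vals.abs()) / 127   # ~[0.0026, 0.0026, 0.0026]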
2025-09-09T14:15:05.8221710Z activation_post_process_2 = self.activation_post_process_2(batch_norm_3); batch_norm_3 = None 2025-09-09T14:15:05.8222822Z conv1d_2 = torch.ops.aten.conv1d.default(activation_post_process_2, activation_post_process_3, zeros_like); activation_post_process_2 = activation_post_process_3 = zeros_like = None 2025-09-09T14:15:05.8223828Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:15:05.8224435Z div_1 = torch.ops.aten.div.Tensor(conv1d_2, reshape_1); conv1d_2 = reshape_1 = None 2025-09-09T14:15:05.8225097Z reshape_2 = torch.ops.aten.reshape.default(conv2_bias, [1, -1, 1]); conv2_bias = None 2025-09-09T14:15:05.8225713Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:15:05.8226751Z batch_norm_2 = torch.ops.aten.batch_norm.default(add_1, bn2_weight, bn2_bias, bn2_running_mean, bn2_running_var, True, 0.1, 1e-05, True); add_1 = bn2_weight = bn2_bias = bn2_running_mean = bn2_running_var = None 2025-09-09T14:15:05.8227849Z activation_post_process_4 = self.activation_post_process_4(batch_norm_2); batch_norm_2 = None 2025-09-09T14:15:05.8228536Z return pytree.tree_unflatten((activation_post_process_4,), self._out_spec) 2025-09-09T14:15:05.8229102Z 2025-09-09T14:15:05.8229408Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:05.8229828Z model fx: GraphModule( 2025-09-09T14:15:05.8230177Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:05.8231265Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:05.8232645Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, max_val=3.2356624603271484) 2025-09-09T14:15:05.8233232Z ) 2025-09-09T14:15:05.8233438Z (conv1): ConvBn1d( 2025-09-09T14:15:05.8233706Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:15:05.8234198Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:15:05.8234808Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:05.8235942Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026, 0.0026, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:15:05.8237451Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3263, -0.3276, -0.3045]), max_val=tensor([0.1376, 0.2760, 0.3298])) 2025-09-09T14:15:05.8238190Z ) 2025-09-09T14:15:05.8238400Z ) 2025-09-09T14:15:05.8238701Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:15.9845355Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0132]), zero_point=tensor([-3], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:15.9847180Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.6533392667770386, max_val=1.7188055515289307) 2025-09-09T14:15:15.9847987Z ) 2025-09-09T14:15:15.9848238Z (conv2): ConvBn1d( 2025-09-09T14:15:15.9848572Z 3, 3, kernel_size=(3,), stride=(1,) 2025-09-09T14:15:15.9849158Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 
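The affine activation qparams printed in these dumps follow directly from the observer's running min/max. For activation_post_process_0 above (min -1.5603605508804321, max 3.2356624603271484, int8 range [-128, 127]) the MinMax rule reproduces the printed scale 0.0188 and zero point -45; a sketch of that rule (PyTorch's observers additionally clamp the observed range so it includes zero):

min_val, max_val = -1.5603605508804321, 3.2356624603271484
qmin, qmax = -128, 127
scale = (max_val - min_val) / (qmax - qmin)   # ~0.01881
zero_point = qmin - round(min_val / scale)    # -128 - round(-82.96) = -45
print(scale, zero_point)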
2025-09-09T14:15:15.9849856Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:15.9851321Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0025, 0.0020, 0.0022]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:15:15.9853356Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3119, -0.2563, -0.2799]), max_val=tensor([0.3101, 0.1970, 0.1855])) 2025-09-09T14:15:15.9854357Z ) 2025-09-09T14:15:15.9854596Z ) 2025-09-09T14:15:15.9854999Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:15.9856426Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0110]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:15.9858160Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.403315544128418, max_val=1.3918161392211914) 2025-09-09T14:15:15.9858951Z ) 2025-09-09T14:15:15.9859189Z ) 2025-09-09T14:15:15.9859327Z 2025-09-09T14:15:15.9859339Z 2025-09-09T14:15:15.9859344Z 2025-09-09T14:15:15.9859477Z def forward(self, x): 2025-09-09T14:15:15.9859944Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:15.9860566Z conv1 = self.conv1(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:15:15.9861199Z activation_post_process_1 = self.activation_post_process_1(conv1); conv1 = None 2025-09-09T14:15:15.9862115Z conv2 = self.conv2(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:15:15.9862760Z activation_post_process_2 = self.activation_post_process_2(conv2); conv2 = None 2025-09-09T14:15:15.9863243Z return activation_post_process_2 2025-09-09T14:15:15.9863539Z 2025-09-09T14:15:15.9863842Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:15.9864259Z diff: tensor([[[0.], 2025-09-09T14:15:15.9864530Z [0.], 2025-09-09T14:15:15.9864733Z [0.]], 2025-09-09T14:15:15.9864861Z 2025-09-09T14:15:15.9865092Z [[0.], 2025-09-09T14:15:15.9865291Z [0.], 2025-09-09T14:15:15.9865503Z [0.]], 2025-09-09T14:15:15.9865630Z 2025-09-09T14:15:15.9865711Z [[0.], 2025-09-09T14:15:15.9865924Z [0.], 2025-09-09T14:15:15.9866148Z [0.]]], grad_fn=) 2025-09-09T14:15:15.9866482Z converted model pt2e: GraphModule( 2025-09-09T14:15:15.9866784Z (conv1): Module() 2025-09-09T14:15:15.9866999Z (bn1): Module() 2025-09-09T14:15:15.9867229Z (conv2): Module() 2025-09-09T14:15:15.9867443Z (bn2): Module() 2025-09-09T14:15:15.9867663Z ) 2025-09-09T14:15:15.9867766Z 2025-09-09T14:15:15.9867770Z 2025-09-09T14:15:15.9867774Z 2025-09-09T14:15:15.9867868Z def forward(self, x): 2025-09-09T14:15:15.9868184Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:15.9868555Z conv2_bias = self.conv2.bias 2025-09-09T14:15:15.9868907Z bn1_num_batches_tracked = self.bn1.num_batches_tracked 2025-09-09T14:15:15.9869336Z bn2_num_batches_tracked = self.bn2.num_batches_tracked 2025-09-09T14:15:15.9870155Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:15:15.9871604Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); 
quantize_per_tensor_default = None 2025-09-09T14:15:15.9872804Z add_ = torch.ops.aten.add_.Tensor(bn1_num_batches_tracked, 1); bn1_num_batches_tracked = add_ = None 2025-09-09T14:15:15.9873572Z add__1 = torch.ops.aten.add_.Tensor(bn2_num_batches_tracked, 1); bn2_num_batches_tracked = add__1 = None 2025-09-09T14:15:15.9874117Z _scale_0 = self._scale_0 2025-09-09T14:15:15.9874395Z _zero_point_0 = self._zero_point_0 2025-09-09T14:15:15.9874793Z _scale_1 = self._scale_1 2025-09-09T14:15:15.9875074Z _zero_point_1 = self._zero_point_1 2025-09-09T14:15:15.9875427Z quantize_per_channel_1 = self._frozen_param0 2025-09-09T14:15:15.9876465Z dequantize_per_channel_1 = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_1, _scale_1, _zero_point_1, 0, -127, 127, torch.int8); quantize_per_channel_1 = _scale_1 = _zero_point_1 = None 2025-09-09T14:15:15.9877525Z conv1_weight_bias = self.conv1.weight_bias 2025-09-09T14:15:15.9878533Z conv1d_5 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel_1, conv1_weight_bias); dequantize_per_tensor_default = dequantize_per_channel_1 = conv1_weight_bias = None 2025-09-09T14:15:15.9880018Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_5, 0.013224096968770027, -3, -128, 127, torch.int8); conv1d_5 = None 2025-09-09T14:15:15.9881542Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.013224096968770027, -3, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:15:15.9882577Z quantize_per_channel = self._frozen_param1 2025-09-09T14:15:15.9883587Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:15:15.9885270Z conv1d_4 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default_1, dequantize_per_channel, conv2_bias); dequantize_per_tensor_default_1 = dequantize_per_channel = conv2_bias = None 2025-09-09T14:15:15.9886689Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_4, 0.010961300693452358, 0, -128, 127, torch.int8); conv1d_4 = None 2025-09-09T14:15:15.9888193Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010961300693452358, 0, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:15:15.9889924Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:15:15.9890393Z 2025-09-09T14:15:15.9890715Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:15.9891150Z onverted model fx: GraphModule( 2025-09-09T14:15:15.9891587Z (conv1): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:15:15.9892174Z (conv2): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:15:15.9892590Z ) 2025-09-09T14:15:15.9892712Z 2025-09-09T14:15:15.9892717Z 2025-09-09T14:15:15.9892722Z 2025-09-09T14:15:15.9892815Z def forward(self, x): 2025-09-09T14:15:15.9893525Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:15:15.9894968Z dequantize_per_tensor_default = 
torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:15.9896157Z conv1 = self.conv1(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:15:15.9897150Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1, 0.013224096968770027, -3, -128, 127, torch.int8); conv1 = None 2025-09-09T14:15:15.9898649Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.013224096968770027, -3, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:15:15.9899858Z conv2 = self.conv2(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:15:15.9900855Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2, 0.010961300693452358, 0, -128, 127, torch.int8); conv2 = None 2025-09-09T14:15:15.9902342Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010961300693452358, 0, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:15:15.9903373Z return dequantize_per_tensor_default_2 2025-09-09T14:15:15.9903676Z 2025-09-09T14:15:15.9903998Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:15.9904398Z diff: tensor([[[0.], 2025-09-09T14:15:15.9904639Z [0.], 2025-09-09T14:15:15.9904843Z [0.]], 2025-09-09T14:15:15.9904988Z 2025-09-09T14:15:15.9905068Z [[0.], 2025-09-09T14:15:15.9905267Z [0.], 2025-09-09T14:15:15.9905480Z [0.]], 2025-09-09T14:15:15.9905606Z 2025-09-09T14:15:15.9905699Z [[0.], 2025-09-09T14:15:15.9905898Z [0.], 2025-09-09T14:15:15.9906108Z [0.]]]) 2025-09-09T14:15:15.9906333Z model pt2e: GraphModule( 2025-09-09T14:15:15.9906597Z (conv1): Module() 2025-09-09T14:15:15.9906814Z (bn1): Module() 2025-09-09T14:15:15.9907042Z (conv2): Module() 2025-09-09T14:15:15.9907255Z (bn2): Module() 2025-09-09T14:15:15.9907588Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:15.9908662Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:15.9910281Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, max_val=3.2356624603271484) 2025-09-09T14:15:15.9910892Z ) 2025-09-09T14:15:15.9911188Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:15.9912286Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0025]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:15:15.9913589Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.31192728877067566, max_val=0.31014329195022583) 2025-09-09T14:15:15.9914261Z ) 2025-09-09T14:15:15.9914570Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:15.9915748Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:15:24.6686520Z 
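The torch.ops.quantized_decomposed.quantize_per_tensor / dequantize_per_tensor calls in the converted graphs above are plain affine quantization with the printed scale and zero point (for the input activation: scale 0.01880793273448944, zero point -45). A sketch of that round trip, not the library kernels themselves:

import torch

def quantize_per_tensor(x, scale, zero_point, qmin=-128, qmax=127):
    # q = clamp(round(x / scale) + zero_point, qmin, qmax), stored as int8
    return torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.int8)

def dequantize_per_tensor(q, scale, zero_point):
    # x_hat = (q - zero_point) * scale
    return (q.to(torch.float32) - zero_point) * scale

x = torch.randn(1, 3, 5)
q = quantize_per_tensor(x, 0.01880793273448944, -45)
x_hat = dequantize_per_tensor(q, 0.01880793273448944, -45)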
(activation_post_process): MovingAverageMinMaxObserver(min_val=-0.32764676213264465, max_val=0.3298276662826538) 2025-09-09T14:15:24.6687411Z ) 2025-09-09T14:15:24.6687816Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:24.6689257Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0132]), zero_point=tensor([-3], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:24.6691001Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.652099370956421, max_val=1.720017671585083) 2025-09-09T14:15:24.6691777Z ) 2025-09-09T14:15:24.6692160Z (activation_post_process_4): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:24.6693606Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:24.6695309Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.4020289182662964, max_val=1.3896838426589966) 2025-09-09T14:15:24.6696086Z ) 2025-09-09T14:15:24.6696334Z ) 2025-09-09T14:15:24.6696468Z 2025-09-09T14:15:24.6696473Z 2025-09-09T14:15:24.6696477Z 2025-09-09T14:15:24.6696598Z def forward(self, x): 2025-09-09T14:15:24.6697009Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:24.6697495Z conv1_weight = self.conv1.weight 2025-09-09T14:15:24.6697912Z bn1_weight = self.bn1.weight 2025-09-09T14:15:24.6698289Z bn1_bias = self.bn1.bias 2025-09-09T14:15:24.6698646Z conv2_weight = self.conv2.weight 2025-09-09T14:15:24.6699049Z conv2_bias = self.conv2.bias 2025-09-09T14:15:24.6699417Z bn2_weight = self.bn2.weight 2025-09-09T14:15:24.6699789Z bn2_bias = self.bn2.bias 2025-09-09T14:15:24.6700162Z bn1_running_mean = self.bn1.running_mean 2025-09-09T14:15:24.6700617Z bn1_running_var = self.bn1.running_var 2025-09-09T14:15:24.6701101Z bn1_num_batches_tracked = self.bn1.num_batches_tracked 2025-09-09T14:15:24.6701616Z bn2_running_mean = self.bn2.running_mean 2025-09-09T14:15:24.6702045Z bn2_running_var = self.bn2.running_var 2025-09-09T14:15:24.6702537Z bn2_num_batches_tracked = self.bn2.num_batches_tracked 2025-09-09T14:15:24.6703201Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:24.6704081Z add_ = torch.ops.aten.add_.Tensor(bn1_num_batches_tracked, 1); bn1_num_batches_tracked = add_ = None 2025-09-09T14:15:24.6705097Z add__1 = torch.ops.aten.add_.Tensor(bn2_num_batches_tracked, 1); bn2_num_batches_tracked = add__1 = None 2025-09-09T14:15:24.6705892Z add = torch.ops.aten.add.Tensor(bn2_running_var, 1e-05) 2025-09-09T14:15:24.6706470Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:15:24.6707060Z div = torch.ops.aten.div.Tensor(bn2_weight, sqrt); sqrt = None 2025-09-09T14:15:24.6708031Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:15:24.6708794Z mul = torch.ops.aten.mul.Tensor(conv2_weight, reshape); conv2_weight = reshape = None 2025-09-09T14:15:24.6709634Z activation_post_process_3 = self.activation_post_process_3(mul); mul = None 2025-09-09T14:15:24.6710756Z zeros_like = torch.ops.aten.zeros_like.default(conv2_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:15:24.6711572Z add_2 = torch.ops.aten.add.Tensor(bn1_running_var, 1e-05) 2025-09-09T14:15:24.6712306Z sqrt_1 = torch.ops.aten.sqrt.default(add_2); add_2 = None 2025-09-09T14:15:24.6712945Z 
div_2 = torch.ops.aten.div.Tensor(bn1_weight, sqrt_1); sqrt_1 = None 2025-09-09T14:15:24.6713609Z reshape_3 = torch.ops.aten.reshape.default(div_2, [-1, 1, 1]) 2025-09-09T14:15:24.6714395Z mul_1 = torch.ops.aten.mul.Tensor(conv1_weight, reshape_3); conv1_weight = reshape_3 = None 2025-09-09T14:15:24.6715341Z activation_post_process_1 = self.activation_post_process_1(mul_1); mul_1 = None 2025-09-09T14:15:24.6716650Z conv1d_3 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:15:24.6717933Z reshape_4 = torch.ops.aten.reshape.default(div_2, [1, -1, 1]); div_2 = None 2025-09-09T14:15:24.6718732Z div_3 = torch.ops.aten.div.Tensor(conv1d_3, reshape_4); conv1d_3 = reshape_4 = None 2025-09-09T14:15:24.6720144Z batch_norm_3 = torch.ops.aten.batch_norm.default(div_3, bn1_weight, bn1_bias, bn1_running_mean, bn1_running_var, True, 0.1, 1e-05, True); div_3 = bn1_weight = bn1_bias = bn1_running_mean = bn1_running_var = None 2025-09-09T14:15:24.6721620Z activation_post_process_2 = self.activation_post_process_2(batch_norm_3); batch_norm_3 = None 2025-09-09T14:15:24.6723100Z conv1d_2 = torch.ops.aten.conv1d.default(activation_post_process_2, activation_post_process_3, zeros_like); activation_post_process_2 = activation_post_process_3 = zeros_like = None 2025-09-09T14:15:24.6724419Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:15:24.6732996Z div_1 = torch.ops.aten.div.Tensor(conv1d_2, reshape_1); conv1d_2 = reshape_1 = None 2025-09-09T14:15:24.6733805Z reshape_2 = torch.ops.aten.reshape.default(conv2_bias, [1, -1, 1]); conv2_bias = None 2025-09-09T14:15:24.6734442Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:15:24.6735492Z batch_norm_2 = torch.ops.aten.batch_norm.default(add_1, bn2_weight, bn2_bias, bn2_running_mean, bn2_running_var, True, 0.1, 1e-05, True); add_1 = bn2_weight = bn2_bias = bn2_running_mean = bn2_running_var = None 2025-09-09T14:15:24.6736636Z activation_post_process_4 = self.activation_post_process_4(batch_norm_2); batch_norm_2 = None 2025-09-09T14:15:24.6737312Z return pytree.tree_unflatten((activation_post_process_4,), self._out_spec) 2025-09-09T14:15:24.6737767Z 2025-09-09T14:15:24.6738085Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:24.6738506Z model fx: GraphModule( 2025-09-09T14:15:24.6738858Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:24.6739948Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0188]), zero_point=tensor([-45], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:24.6741252Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5603605508804321, max_val=3.2356624603271484) 2025-09-09T14:15:24.6741844Z ) 2025-09-09T14:15:24.6742051Z (conv1): ConvBn1d( 2025-09-09T14:15:24.6742318Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:15:24.6742808Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:15:24.6743342Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:24.6744597Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, 
reduce_range=False 2025-09-09T14:15:24.6745921Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.32764676213264465, max_val=0.3298276662826538) 2025-09-09T14:15:24.6746510Z ) 2025-09-09T14:15:24.6746711Z ) 2025-09-09T14:15:24.6747014Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:24.6748180Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0132]), zero_point=tensor([-3], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:24.6749456Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.652099370956421, max_val=1.720017671585083) 2025-09-09T14:15:24.6750035Z ) 2025-09-09T14:15:24.6750246Z (conv2): ConvBn1d( 2025-09-09T14:15:24.6750496Z 3, 3, kernel_size=(3,), stride=(1,) 2025-09-09T14:15:24.6750966Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:15:24.6751497Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:24.6752552Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0025]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:15:24.6753867Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.31192728877067566, max_val=0.31014329195022583) 2025-09-09T14:15:24.6754466Z ) 2025-09-09T14:15:24.6754668Z ) 2025-09-09T14:15:24.6755055Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:24.6756147Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:24.6757432Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.4020289182662964, max_val=1.3896838426589966) 2025-09-09T14:15:24.6758019Z ) 2025-09-09T14:15:24.6758221Z ) 2025-09-09T14:15:24.6758326Z 2025-09-09T14:15:24.6758330Z 2025-09-09T14:15:24.6758335Z 2025-09-09T14:15:24.6758428Z def forward(self, x): 2025-09-09T14:15:24.6758831Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:24.6759452Z conv1 = self.conv1(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:15:24.6760077Z activation_post_process_1 = self.activation_post_process_1(conv1); conv1 = None 2025-09-09T14:15:24.6760718Z conv2 = self.conv2(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:15:24.6761344Z activation_post_process_2 = self.activation_post_process_2(conv2); conv2 = None 2025-09-09T14:15:24.6761842Z return activation_post_process_2 2025-09-09T14:15:24.6762124Z 2025-09-09T14:15:24.6762435Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:24.6762850Z diff: tensor([[[0.], 2025-09-09T14:15:24.6763104Z [0.], 2025-09-09T14:15:24.6763321Z [0.]], 2025-09-09T14:15:24.6763449Z 2025-09-09T14:15:24.6763530Z [[0.], 2025-09-09T14:15:24.6763744Z [0.], 2025-09-09T14:15:24.6763945Z [0.]], 2025-09-09T14:15:24.6764090Z 2025-09-09T14:15:24.6764172Z [[0.], 2025-09-09T14:15:24.6764371Z [0.], 2025-09-09T14:15:24.6764615Z [0.]]], grad_fn=) 2025-09-09T14:15:24.6764953Z converted model pt2e: GraphModule( 2025-09-09T14:15:24.6765240Z (conv1): Module() 2025-09-09T14:15:24.6765476Z (bn1): Module() 2025-09-09T14:15:24.6765692Z (conv2): Module() 
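Note that this second pair of dumps differs from the first mainly in the weight fake-quant: the weight observers here are per-tensor symmetric (a single scale such as tensor([0.0026])) rather than per-channel symmetric (one scale per output channel). With an XNNPACK-style quantizer that corresponds to the is_per_channel toggle, e.g. get_symmetric_quantization_config(is_qat=True, is_per_channel=False), as in the flow sketched further down; the exact configuration this test uses is not shown in the log.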
2025-09-09T14:15:24.6765922Z (bn2): Module() 2025-09-09T14:15:24.6766129Z ) 2025-09-09T14:15:24.6766250Z 2025-09-09T14:15:24.6766366Z 2025-09-09T14:15:24.6766370Z 2025-09-09T14:15:24.6766467Z def forward(self, x): 2025-09-09T14:15:24.6766777Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:24.6767169Z conv2_bias = self.conv2.bias 2025-09-09T14:15:24.6767531Z bn1_num_batches_tracked = self.bn1.num_batches_tracked 2025-09-09T14:15:24.6767950Z bn2_num_batches_tracked = self.bn2.num_batches_tracked 2025-09-09T14:15:24.6768789Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:15:38.6379592Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:38.6380844Z add_ = torch.ops.aten.add_.Tensor(bn1_num_batches_tracked, 1); bn1_num_batches_tracked = add_ = None 2025-09-09T14:15:38.6381646Z add__1 = torch.ops.aten.add_.Tensor(bn2_num_batches_tracked, 1); bn2_num_batches_tracked = add__1 = None 2025-09-09T14:15:38.6382227Z quantize_per_tensor_1 = self._frozen_param0 2025-09-09T14:15:38.6383191Z dequantize_per_tensor_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_1, 0.002597068203613162, 0, -127, 127, torch.int8); quantize_per_tensor_1 = None 2025-09-09T14:15:38.6384120Z conv1_weight_bias = self.conv1.weight_bias 2025-09-09T14:15:38.6385117Z conv1d_5 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_tensor_1, conv1_weight_bias); dequantize_per_tensor_default = dequantize_per_tensor_1 = conv1_weight_bias = None 2025-09-09T14:15:38.6386595Z quantize_per_tensor_default_3 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_5, 0.013223988004028797, -3, -128, 127, torch.int8); conv1d_5 = None 2025-09-09T14:15:38.6388120Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_3, 0.013223988004028797, -3, -128, 127, torch.int8); quantize_per_tensor_default_3 = None 2025-09-09T14:15:38.6389163Z quantize_per_tensor = self._frozen_param1 2025-09-09T14:15:38.6390078Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0024561204481869936, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:15:38.6391571Z conv1d_4 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default_3, dequantize_per_tensor, conv2_bias); dequantize_per_tensor_default_3 = dequantize_per_tensor = conv2_bias = None 2025-09-09T14:15:38.6392979Z quantize_per_tensor_default_4 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_4, 0.010947893373668194, 0, -128, 127, torch.int8); conv1d_4 = None 2025-09-09T14:15:38.6394470Z dequantize_per_tensor_default_4 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_4, 0.010947893373668194, 0, -128, 127, torch.int8); quantize_per_tensor_default_4 = None 2025-09-09T14:15:38.6395729Z return pytree.tree_unflatten((dequantize_per_tensor_default_4,), self._out_spec) 2025-09-09T14:15:38.6396210Z 2025-09-09T14:15:38.6396517Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:38.6396950Z onverted model fx: GraphModule( 2025-09-09T14:15:38.6397368Z (conv1): QuantizedConv1d(Reference)(3, 3, 
kernel_size=(3,), stride=(1,)) 2025-09-09T14:15:38.6397942Z (conv2): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:15:38.6398367Z ) 2025-09-09T14:15:38.6398489Z 2025-09-09T14:15:38.6398493Z 2025-09-09T14:15:38.6398497Z 2025-09-09T14:15:38.6398593Z def forward(self, x): 2025-09-09T14:15:38.6399314Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01880793273448944, -45, -128, 127, torch.int8); x = None 2025-09-09T14:15:38.6401053Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01880793273448944, -45, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:38.6402245Z conv1 = self.conv1(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:15:38.6403236Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1, 0.013223988004028797, -3, -128, 127, torch.int8); conv1 = None 2025-09-09T14:15:38.6404727Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.013223988004028797, -3, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:15:38.6406049Z conv2 = self.conv2(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:15:38.6407045Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2, 0.010947893373668194, 0, -128, 127, torch.int8); conv2 = None 2025-09-09T14:15:38.6408539Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010947893373668194, 0, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:15:38.6409567Z return dequantize_per_tensor_default_2 2025-09-09T14:15:38.6409872Z 2025-09-09T14:15:38.6410421Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:38.6410825Z diff: tensor([[[0.], 2025-09-09T14:15:38.6411087Z [0.], 2025-09-09T14:15:38.6411292Z [0.]], 2025-09-09T14:15:38.6411425Z 2025-09-09T14:15:38.6411518Z [[0.], 2025-09-09T14:15:38.6411718Z [0.], 2025-09-09T14:15:38.6411927Z [0.]], 2025-09-09T14:15:38.6412052Z 2025-09-09T14:15:38.6412133Z [[0.], 2025-09-09T14:15:38.6412343Z [0.], 2025-09-09T14:15:38.6412541Z [0.]]]) 2025-09-09T14:15:38.6412961Z PASSED 2025-09-09T14:15:38.6413786Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_per_channel_weight_bias PASSED 2025-09-09T14:15:38.6414923Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_relu_fusion model pt2e: GraphModule( 2025-09-09T14:15:38.6415648Z (conv): Module() 2025-09-09T14:15:38.6415867Z (bn): Module() 2025-09-09T14:15:38.6416209Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:38.6417284Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:38.6418585Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:15:38.6419179Z ) 2025-09-09T14:15:38.6419477Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:38.6420623Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), 
scale=tensor([0.0023, 0.0026, 0.0025]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:15:38.6422121Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.2935, -0.3313, -0.3129]), max_val=tensor([0.2532, 0.1628, 0.3013])) 2025-09-09T14:15:38.6422882Z ) 2025-09-09T14:15:38.6423193Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:38.6424281Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0055]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:38.6425512Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.410499095916748) 2025-09-09T14:15:38.6426036Z ) 2025-09-09T14:15:38.6426374Z ) 2025-09-09T14:15:38.6426483Z 2025-09-09T14:15:38.6426487Z 2025-09-09T14:15:38.6426491Z 2025-09-09T14:15:38.6426603Z def forward(self, x): 2025-09-09T14:15:38.6426917Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:38.6427302Z conv_weight = self.conv.weight 2025-09-09T14:15:38.6427601Z conv_bias = self.conv.bias 2025-09-09T14:15:38.6427896Z bn_weight = self.bn.weight 2025-09-09T14:15:38.6428173Z bn_bias = self.bn.bias 2025-09-09T14:15:38.6428463Z bn_running_mean = self.bn.running_mean 2025-09-09T14:15:38.6428895Z bn_running_var = self.bn.running_var 2025-09-09T14:15:38.6429262Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:15:38.6429770Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:38.6430440Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:15:38.6431050Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:15:38.6431480Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:15:38.6431949Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:15:38.6432443Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:15:38.6433003Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:15:38.6433643Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:15:38.6434337Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:15:38.6435562Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:15:38.6436571Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:15:38.6437192Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:15:38.6437845Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1]); conv_bias = None 2025-09-09T14:15:38.6438463Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:15:38.6439465Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:15:38.6440453Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:15:38.6441039Z 
activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:15:38.6441658Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:15:38.6442092Z 2025-09-09T14:15:38.6442407Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:38.6442814Z model fx: GraphModule( 2025-09-09T14:15:38.6443172Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:38.6444247Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:48.0844737Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:15:48.0845802Z ) 2025-09-09T14:15:48.0846194Z (conv): ConvBnReLU1d( 2025-09-09T14:15:48.0846517Z 3, 3, kernel_size=(3,), stride=(1,) 2025-09-09T14:15:48.0846973Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:15:48.0847511Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:48.0848888Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0023, 0.0026, 0.0025]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:15:48.0850415Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.2935, -0.3313, -0.3129]), max_val=tensor([0.2532, 0.1628, 0.3013])) 2025-09-09T14:15:48.0851172Z ) 2025-09-09T14:15:48.0851360Z ) 2025-09-09T14:15:48.0851678Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:48.0852759Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0055]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:48.0854111Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.410499095916748) 2025-09-09T14:15:48.0854652Z ) 2025-09-09T14:15:48.0854838Z ) 2025-09-09T14:15:48.0854949Z 2025-09-09T14:15:48.0854954Z 2025-09-09T14:15:48.0854963Z 2025-09-09T14:15:48.0855092Z def forward(self, x): 2025-09-09T14:15:48.0855480Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:48.0856096Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:15:48.0856710Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:15:48.0857210Z return activation_post_process_1 2025-09-09T14:15:48.0857499Z 2025-09-09T14:15:48.0857817Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:48.0858231Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:15:48.0858502Z [0., 0., 0.], 2025-09-09T14:15:48.0858772Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:15:48.0859107Z converted model pt2e: GraphModule( 2025-09-09T14:15:48.0859411Z (conv): Module() 2025-09-09T14:15:48.0859633Z (bn): Module() 2025-09-09T14:15:48.0859858Z ) 2025-09-09T14:15:48.0859996Z 2025-09-09T14:15:48.0860005Z 2025-09-09T14:15:48.0860009Z 2025-09-09T14:15:48.0860116Z def forward(self, x): 2025-09-09T14:15:48.0860427Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:48.0860808Z conv_bias = self.conv.bias 2025-09-09T14:15:48.0861143Z 
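For orientation: the "model pt2e"/"converted model pt2e" graphs in this output come from the export-based QAT flow, while "model fx" is the older FX graph-mode flow they are compared against. A minimal sketch of the PT2E side, assuming the torch.ao.quantization.quantize_pt2e entry points and an XNNPACK-style quantizer (import paths and the export API differ across PyTorch releases, and the module and example input below are illustrative, not the test's own):

import torch
from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

class M(torch.nn.Module):
    # toy stand-in for the Conv1d + BatchNorm1d (+ ReLU) modules under test
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv1d(3, 3, 3)
        self.bn = torch.nn.BatchNorm1d(3)
    def forward(self, x):
        return self.bn(self.conv(x))

model = M()
example_inputs = (torch.randn(1, 3, 5),)

exported = torch.export.export_for_training(model, example_inputs).module()
quantizer = XNNPACKQuantizer().set_global(
    get_symmetric_quantization_config(is_qat=True, is_per_channel=True)
)
prepared = prepare_qat_pt2e(exported, quantizer)   # -> "model pt2e"
prepared(*example_inputs)                          # run QAT / calibration steps
converted = convert_pt2e(prepared)                 # -> "converted model pt2e"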
bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:15:48.0861964Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:15:48.0863401Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:48.0864599Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:15:48.0865135Z _scale_0 = self._scale_0 2025-09-09T14:15:48.0865412Z _zero_point_0 = self._zero_point_0 2025-09-09T14:15:48.0865764Z quantize_per_channel = self._frozen_param0 2025-09-09T14:15:48.0866780Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:15:48.0868346Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:15:48.0869332Z relu = torch.ops.aten.relu.default(conv1d_2); conv1d_2 = None 2025-09-09T14:15:48.0870224Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.005531368777155876, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:15:48.0871826Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.005531368777155876, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:15:48.0873018Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:15:48.0873479Z 2025-09-09T14:15:48.0873795Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:48.0874211Z onverted model fx: GraphModule( 2025-09-09T14:15:48.0874502Z (conv): ConvReLU1d( 2025-09-09T14:15:48.0874950Z (0): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:15:48.0875450Z (1): ReLU() 2025-09-09T14:15:48.0875670Z ) 2025-09-09T14:15:48.0875854Z ) 2025-09-09T14:15:48.0875959Z 2025-09-09T14:15:48.0875963Z 2025-09-09T14:15:48.0875967Z 2025-09-09T14:15:48.0876074Z def forward(self, x): 2025-09-09T14:15:48.0876766Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:15:48.0878215Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:15:48.0879390Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:15:48.0880366Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.005531368777155876, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:15:48.0881865Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.005531368777155876, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:15:48.0882909Z return dequantize_per_tensor_default_1 2025-09-09T14:15:48.0883211Z 2025-09-09T14:15:48.0883523Z # To see more 
debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:48.0883931Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:15:48.0884200Z [0., 0., 0.], 2025-09-09T14:15:48.0884426Z [0., 0., 0.]]]) 2025-09-09T14:15:48.0884685Z model pt2e: GraphModule( 2025-09-09T14:15:48.0884935Z (conv): Module() 2025-09-09T14:15:48.0885165Z (bn): Module() 2025-09-09T14:15:48.0885487Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:48.0886573Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:48.0887860Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:15:48.0888435Z ) 2025-09-09T14:15:48.0888747Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:48.0889829Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:15:48.0891114Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.3312976360321045, max_val=0.3013271391391754) 2025-09-09T14:15:48.0891710Z ) 2025-09-09T14:15:48.0892006Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:15:48.0893089Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0055]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:15:48.0894308Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.4099854230880737) 2025-09-09T14:15:48.0894846Z ) 2025-09-09T14:15:48.0895040Z ) 2025-09-09T14:15:48.0895143Z 2025-09-09T14:15:48.0895147Z 2025-09-09T14:15:48.0895151Z 2025-09-09T14:15:48.0895242Z def forward(self, x): 2025-09-09T14:15:48.0895631Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:15:48.0896006Z conv_weight = self.conv.weight 2025-09-09T14:15:48.0896320Z conv_bias = self.conv.bias 2025-09-09T14:15:48.0896599Z bn_weight = self.bn.weight 2025-09-09T14:15:48.0896889Z bn_bias = self.bn.bias 2025-09-09T14:15:48.0897184Z bn_running_mean = self.bn.running_mean 2025-09-09T14:15:48.0897512Z bn_running_var = self.bn.running_var 2025-09-09T14:15:48.0897890Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:15:48.0898379Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:15:48.0899115Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:15:48.0899707Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:15:48.0900148Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:15:48.0900602Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:15:48.0901104Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:15:48.0901671Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:15:48.0902303Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:15:48.0903008Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:15:48.0904126Z conv1d_1 = 
torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:15:48.0905155Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:15:48.0905767Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:15:48.0906413Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1]); conv_bias = None 2025-09-09T14:15:48.0907050Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:15:48.0908035Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:15:48.0909008Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:15:48.0909609Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:15:48.0910585Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:15:48.0911041Z 2025-09-09T14:15:48.0911350Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:15:48.0911769Z model fx: GraphModule( 2025-09-09T14:16:01.4782809Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:01.4784361Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:01.4786087Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:01.4786870Z ) 2025-09-09T14:16:01.4787122Z (conv): ConvBnReLU1d( 2025-09-09T14:16:01.4787465Z 3, 3, kernel_size=(3,), stride=(1,) 2025-09-09T14:16:01.4788049Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:16:01.4788748Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:01.4790156Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:16:01.4792160Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.3312976360321045, max_val=0.3013271391391754) 2025-09-09T14:16:01.4792950Z ) 2025-09-09T14:16:01.4793183Z ) 2025-09-09T14:16:01.4793573Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:01.4795089Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0055]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:01.4796745Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.4099854230880737) 2025-09-09T14:16:01.4797677Z ) 2025-09-09T14:16:01.4797915Z ) 2025-09-09T14:16:01.4798049Z 2025-09-09T14:16:01.4798070Z 2025-09-09T14:16:01.4798075Z 2025-09-09T14:16:01.4798195Z def forward(self, x): 2025-09-09T14:16:01.4798697Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:01.4799499Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:16:01.4800330Z 
activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:16:01.4800806Z return activation_post_process_1 2025-09-09T14:16:01.4801097Z 2025-09-09T14:16:01.4801393Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:01.4801807Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:01.4802057Z [0., 0., 0.], 2025-09-09T14:16:01.4802315Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:16:01.4802641Z converted model pt2e: GraphModule( 2025-09-09T14:16:01.4802940Z (conv): Module() 2025-09-09T14:16:01.4803152Z (bn): Module() 2025-09-09T14:16:01.4803363Z ) 2025-09-09T14:16:01.4803466Z 2025-09-09T14:16:01.4803470Z 2025-09-09T14:16:01.4803474Z 2025-09-09T14:16:01.4803578Z def forward(self, x): 2025-09-09T14:16:01.4803876Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:01.4804249Z conv_bias = self.conv.bias 2025-09-09T14:16:01.4804572Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:16:01.4805393Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:01.4806836Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:01.4808030Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:16:01.4808592Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:16:01.4809492Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.002608642913401127, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:16:01.4811111Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:16:01.4812080Z relu = torch.ops.aten.relu.default(conv1d_2); conv1d_2 = None 2025-09-09T14:16:01.4812954Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.00552935479208827, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:16:01.4814455Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.00552935479208827, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:16:01.4815636Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:16:01.4816093Z 2025-09-09T14:16:01.4816400Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:01.4816811Z onverted model fx: GraphModule( 2025-09-09T14:16:01.4817096Z (conv): ConvReLU1d( 2025-09-09T14:16:01.4818056Z (0): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:16:01.4818484Z (1): ReLU() 2025-09-09T14:16:01.4818684Z ) 2025-09-09T14:16:01.4818873Z ) 2025-09-09T14:16:01.4818975Z 2025-09-09T14:16:01.4818979Z 2025-09-09T14:16:01.4818983Z 2025-09-09T14:16:01.4819083Z def forward(self, x): 2025-09-09T14:16:01.4819776Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:01.4821204Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 
0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:01.4822452Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:16:01.4823441Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.00552935479208827, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:16:01.4824930Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.00552935479208827, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:01.4825950Z return dequantize_per_tensor_default_1 2025-09-09T14:16:01.4826260Z 2025-09-09T14:16:01.4826555Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:01.4826974Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:01.4827238Z [0., 0., 0.], 2025-09-09T14:16:01.4827455Z [0., 0., 0.]]]) 2025-09-09T14:16:01.4827895Z PASSED 2025-09-09T14:16:01.4828669Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_relu_fusion_cuda SKIPPED 2025-09-09T14:16:01.4829841Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_relu_fusion_no_conv_bias model pt2e: GraphModule( 2025-09-09T14:16:01.4830599Z (conv): Module() 2025-09-09T14:16:01.4830835Z (bn): Module() 2025-09-09T14:16:01.4831173Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:01.4832242Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:01.4833513Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:01.4834094Z ) 2025-09-09T14:16:01.4834401Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:01.4835627Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0022, 0.0026, 0.0023]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:16:01.4837127Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.2639, -0.2941, -0.2608]), max_val=tensor([0.2795, 0.3227, 0.2891])) 2025-09-09T14:16:01.4837880Z ) 2025-09-09T14:16:01.4838172Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:01.4839254Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0039]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:01.4840479Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=0.9981018900871277) 2025-09-09T14:16:01.4841006Z ) 2025-09-09T14:16:01.4841198Z ) 2025-09-09T14:16:01.4841300Z 2025-09-09T14:16:01.4841304Z 2025-09-09T14:16:01.4841308Z 2025-09-09T14:16:01.4841399Z def forward(self, x): 2025-09-09T14:16:01.4841714Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:01.4842082Z conv_weight = self.conv.weight 2025-09-09T14:16:01.4842469Z bn_weight = self.bn.weight 2025-09-09T14:16:01.4842755Z bn_bias = self.bn.bias 2025-09-09T14:16:01.4843030Z bn_running_mean = self.bn.running_mean 
2025-09-09T14:16:01.4843367Z bn_running_var = self.bn.running_var 2025-09-09T14:16:01.4843727Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:16:01.4844240Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:01.4844906Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:16:01.4845562Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:16:01.4846003Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:16:01.4846453Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:16:01.4846951Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:16:01.4847507Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:16:01.4848169Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:16:01.4849137Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:16:01.4850063Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:16:01.4850672Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:16:01.4851700Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:16:01.4852663Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:16:01.4853262Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:16:01.4853875Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:16:01.4854317Z 2025-09-09T14:16:11.8047616Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:11.8048259Z model fx: GraphModule( 2025-09-09T14:16:11.8048732Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:11.8050171Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:11.8051928Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:11.8052702Z ) 2025-09-09T14:16:11.8052977Z (conv): ConvBnReLU1d( 2025-09-09T14:16:11.8053329Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:16:11.8053984Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:16:11.8054678Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:11.8056141Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0022, 0.0026, 0.0023]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:16:11.8058160Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.2639, -0.2941, -0.2608]), max_val=tensor([0.2795, 0.3227, 0.2891])) 2025-09-09T14:16:11.8059168Z ) 2025-09-09T14:16:11.8059409Z ) 2025-09-09T14:16:11.8059810Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 
2025-09-09T14:16:11.8061521Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0039]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:11.8063173Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=0.9981018900871277) 2025-09-09T14:16:11.8063875Z ) 2025-09-09T14:16:11.8064125Z ) 2025-09-09T14:16:11.8064256Z 2025-09-09T14:16:11.8064261Z 2025-09-09T14:16:11.8064266Z 2025-09-09T14:16:11.8064396Z def forward(self, x): 2025-09-09T14:16:11.8064886Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:11.8065681Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:16:11.8066622Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:16:11.8067263Z return activation_post_process_1 2025-09-09T14:16:11.8067628Z 2025-09-09T14:16:11.8068025Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:11.8068568Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:11.8068892Z [0., 0., 0.], 2025-09-09T14:16:11.8069228Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:16:11.8069658Z converted model pt2e: GraphModule( 2025-09-09T14:16:11.8069966Z (conv): Module() 2025-09-09T14:16:11.8070177Z (bn): Module() 2025-09-09T14:16:11.8070388Z ) 2025-09-09T14:16:11.8070490Z 2025-09-09T14:16:11.8070494Z 2025-09-09T14:16:11.8070498Z 2025-09-09T14:16:11.8070588Z def forward(self, x): 2025-09-09T14:16:11.8070897Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:11.8071317Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:16:11.8072120Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:11.8073552Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:11.8074816Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:16:11.8075355Z _scale_0 = self._scale_0 2025-09-09T14:16:11.8075647Z _zero_point_0 = self._zero_point_0 2025-09-09T14:16:11.8075979Z quantize_per_channel = self._frozen_param0 2025-09-09T14:16:11.8077007Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:16:11.8078019Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:16:11.8078989Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_weight_bias = None 2025-09-09T14:16:11.8080031Z relu = torch.ops.aten.relu.default(conv1d_2); conv1d_2 = None 2025-09-09T14:16:11.8080922Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.0039141252636909485, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:16:11.8082426Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0039141252636909485, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 
2025-09-09T14:16:11.8083587Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:16:11.8084056Z 2025-09-09T14:16:11.8084368Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:11.8084777Z onverted model fx: GraphModule( 2025-09-09T14:16:11.8085062Z (conv): ConvReLU1d( 2025-09-09T14:16:11.8085413Z (0): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:16:11.8085823Z (1): ReLU() 2025-09-09T14:16:11.8086021Z ) 2025-09-09T14:16:11.8086209Z ) 2025-09-09T14:16:11.8086310Z 2025-09-09T14:16:11.8086314Z 2025-09-09T14:16:11.8086317Z 2025-09-09T14:16:11.8086520Z def forward(self, x): 2025-09-09T14:16:11.8087210Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:11.8088631Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:11.8089783Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:16:11.8090844Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0039141252636909485, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:16:11.8092344Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0039141252636909485, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:11.8093369Z return dequantize_per_tensor_default_1 2025-09-09T14:16:11.8093679Z 2025-09-09T14:16:11.8093974Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:11.8094386Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:11.8094637Z [0., 0., 0.], 2025-09-09T14:16:11.8094874Z [0., 0., 0.]]]) 2025-09-09T14:16:11.8095128Z model pt2e: GraphModule( 2025-09-09T14:16:11.8095370Z (conv): Module() 2025-09-09T14:16:11.8095594Z (bn): Module() 2025-09-09T14:16:11.8095915Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:11.8096994Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:11.8098263Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:11.8098855Z ) 2025-09-09T14:16:11.8099159Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:11.8100234Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:16:11.8101518Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.2940981984138489, max_val=0.32268622517585754) 2025-09-09T14:16:11.8102109Z ) 2025-09-09T14:16:11.8102413Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:11.8103490Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0040]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 
2025-09-09T14:16:11.8104705Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.0074307918548584) 2025-09-09T14:16:11.8105246Z ) 2025-09-09T14:16:11.8105421Z ) 2025-09-09T14:16:11.8105534Z 2025-09-09T14:16:11.8105538Z 2025-09-09T14:16:11.8105542Z 2025-09-09T14:16:11.8105632Z def forward(self, x): 2025-09-09T14:16:11.8105946Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:11.8106313Z conv_weight = self.conv.weight 2025-09-09T14:16:11.8106621Z bn_weight = self.bn.weight 2025-09-09T14:16:11.8106892Z bn_bias = self.bn.bias 2025-09-09T14:16:11.8107183Z bn_running_mean = self.bn.running_mean 2025-09-09T14:16:11.8107508Z bn_running_var = self.bn.running_var 2025-09-09T14:16:11.8107882Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:16:11.8108367Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:11.8109032Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:16:11.8109705Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:16:11.8110314Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:16:11.8110783Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:16:11.8111264Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:16:11.8111833Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:16:11.8112462Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:16:11.8113545Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:16:11.8114487Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:16:11.8115151Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:16:11.8116181Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:16:11.8117147Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:16:11.8117727Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:16:11.8118339Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:16:11.8118767Z 2025-09-09T14:16:21.3017283Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:21.3017950Z model fx: GraphModule( 2025-09-09T14:16:21.3018438Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3019554Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:21.3020846Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:21.3021425Z ) 2025-09-09T14:16:21.3021633Z (conv): ConvBnReLU1d( 2025-09-09T14:16:21.3021908Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:16:21.3022390Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:16:21.3022907Z (weight_fake_quant): 
FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3023977Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:16:21.3025279Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.2940981984138489, max_val=0.32268622517585754) 2025-09-09T14:16:21.3025868Z ) 2025-09-09T14:16:21.3026061Z ) 2025-09-09T14:16:21.3026362Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3027454Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0040]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:21.3028723Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.0074307918548584) 2025-09-09T14:16:21.3029243Z ) 2025-09-09T14:16:21.3029443Z ) 2025-09-09T14:16:21.3029548Z 2025-09-09T14:16:21.3029552Z 2025-09-09T14:16:21.3029556Z 2025-09-09T14:16:21.3029658Z def forward(self, x): 2025-09-09T14:16:21.3030040Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:21.3030645Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:16:21.3031252Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:16:21.3032003Z return activation_post_process_1 2025-09-09T14:16:21.3032288Z 2025-09-09T14:16:21.3032596Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:21.3033011Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:21.3033262Z [0., 0., 0.], 2025-09-09T14:16:21.3033522Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:16:21.3033851Z converted model pt2e: GraphModule( 2025-09-09T14:16:21.3034149Z (conv): Module() 2025-09-09T14:16:21.3034360Z (bn): Module() 2025-09-09T14:16:21.3034808Z ) 2025-09-09T14:16:21.3034912Z 2025-09-09T14:16:21.3034916Z 2025-09-09T14:16:21.3034920Z 2025-09-09T14:16:21.3035010Z def forward(self, x): 2025-09-09T14:16:21.3035325Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:21.3035750Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:16:21.3036559Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:21.3038007Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:21.3039215Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:16:21.3039763Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:16:21.3040680Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0025408363435417414, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:16:21.3041587Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:16:21.3042547Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_weight_bias = None 2025-09-09T14:16:21.3043581Z relu = torch.ops.aten.relu.default(conv1d_2); conv1d_2 = None 
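# (editorial sketch, not part of the captured graph) Each quantize/dequantize pair in this
# converted graph simulates int8 rounding in float; for the torch.ops.quantized_decomposed
# ops used here this is, roughly:
#   q  = clamp(round(x / scale) + zero_point, quant_min, quant_max)   # stored as torch.int8
#   dq = (q - zero_point) * scale                                     # back to float32
# so the relu output below is requantized with its recorded scale and zero_point (-128)
# before being returned.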
2025-09-09T14:16:21.3044465Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.003950709011405706, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:16:21.3045965Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.003950709011405706, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:16:21.3047135Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:16:21.3047606Z 2025-09-09T14:16:21.3047923Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:21.3048339Z onverted model fx: GraphModule( 2025-09-09T14:16:21.3048627Z (conv): ConvReLU1d( 2025-09-09T14:16:21.3048981Z (0): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:16:21.3049402Z (1): ReLU() 2025-09-09T14:16:21.3049601Z ) 2025-09-09T14:16:21.3049788Z ) 2025-09-09T14:16:21.3049889Z 2025-09-09T14:16:21.3049893Z 2025-09-09T14:16:21.3049897Z 2025-09-09T14:16:21.3049991Z def forward(self, x): 2025-09-09T14:16:21.3050691Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:21.3052144Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:21.3053307Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:16:21.3054297Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.003950709011405706, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:16:21.3055906Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.003950709011405706, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:21.3056930Z return dequantize_per_tensor_default_1 2025-09-09T14:16:21.3057242Z 2025-09-09T14:16:21.3057539Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:21.3057956Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:21.3058209Z [0., 0., 0.], 2025-09-09T14:16:21.3058447Z [0., 0., 0.]]]) 2025-09-09T14:16:21.3058959Z PASSED 2025-09-09T14:16:21.3059614Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_no_bias model pt2e: GraphModule( 2025-09-09T14:16:21.3060313Z (conv): Module() 2025-09-09T14:16:21.3060637Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3061790Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021, 0.0023, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:16:21.3063289Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1720, -0.2918, -0.2941]), max_val=tensor([0.2663, 0.2795, 0.3227])) 2025-09-09T14:16:21.3064042Z ) 2025-09-09T14:16:21.3064355Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3065413Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, 
qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:21.3066687Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:21.3067266Z ) 2025-09-09T14:16:21.3067572Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3068658Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0006]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:21.3069871Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=0.16202233731746674) 2025-09-09T14:16:21.3070409Z ) 2025-09-09T14:16:21.3070585Z ) 2025-09-09T14:16:21.3070703Z 2025-09-09T14:16:21.3070707Z 2025-09-09T14:16:21.3070711Z 2025-09-09T14:16:21.3070805Z def forward(self, x): 2025-09-09T14:16:21.3071117Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:21.3071482Z conv_weight = self.conv.weight 2025-09-09T14:16:21.3072002Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:16:21.3072650Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:21.3073579Z conv1d = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:16:21.3074446Z relu = torch.ops.aten.relu.default(conv1d); conv1d = None 2025-09-09T14:16:21.3075065Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:16:21.3075680Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:16:21.3076115Z 2025-09-09T14:16:21.3076428Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:21.3076830Z model fx: GraphModule( 2025-09-09T14:16:21.3077186Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3078251Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:21.3079594Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:21.3080187Z ) 2025-09-09T14:16:21.3080376Z (conv): ConvReLU1d( 2025-09-09T14:16:21.3080653Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:16:21.3081040Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:21.3082140Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021, 0.0023, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:16:21.3083695Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1720, -0.2918, -0.2941]), max_val=tensor([0.2663, 0.2795, 0.3227])) 2025-09-09T14:16:21.3084440Z ) 2025-09-09T14:16:22.3901264Z ) 2025-09-09T14:16:22.3901767Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:22.3903301Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0006]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 
2025-09-09T14:16:22.3904993Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=0.16202233731746674) 2025-09-09T14:16:22.3905715Z ) 2025-09-09T14:16:22.3905956Z ) 2025-09-09T14:16:22.3906106Z 2025-09-09T14:16:22.3906111Z 2025-09-09T14:16:22.3906116Z 2025-09-09T14:16:22.3906249Z def forward(self, x): 2025-09-09T14:16:22.3906746Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:22.3907536Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:16:22.3908355Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:16:22.3908982Z return activation_post_process_1 2025-09-09T14:16:22.3909386Z 2025-09-09T14:16:22.3909781Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:22.3910462Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:22.3910790Z [0., 0., 0.], 2025-09-09T14:16:22.3911126Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:16:22.3911551Z converted model pt2e: GraphModule( 2025-09-09T14:16:22.3911931Z (conv): Module() 2025-09-09T14:16:22.3912214Z ) 2025-09-09T14:16:22.3912347Z 2025-09-09T14:16:22.3912352Z 2025-09-09T14:16:22.3912358Z 2025-09-09T14:16:22.3912473Z def forward(self, x): 2025-09-09T14:16:22.3912887Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:22.3913358Z _scale_0 = self._scale_0 2025-09-09T14:16:22.3913722Z _zero_point_0 = self._zero_point_0 2025-09-09T14:16:22.3914178Z quantize_per_channel_default = self._frozen_param0 2025-09-09T14:16:22.3915770Z dequantize_per_channel_default = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_default, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel_default = _scale_0 = _zero_point_0 = None 2025-09-09T14:16:22.3917840Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:22.3919753Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:22.3921823Z conv1d = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel_default); dequantize_per_tensor_default = dequantize_per_channel_default = None 2025-09-09T14:16:22.3923078Z relu = torch.ops.aten.relu.default(conv1d); conv1d = None 2025-09-09T14:16:22.3924233Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.0006353817298077047, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:16:22.3926530Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0006353817298077047, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:22.3928113Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:16:22.3928719Z 2025-09-09T14:16:22.3929119Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:22.3929658Z onverted model fx: GraphModule( 2025-09-09T14:16:22.3930147Z (conv): ConvReLU1d( 2025-09-09T14:16:22.3930651Z (0): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,), bias=False) 2025-09-09T14:16:22.3931249Z (1): ReLU() 2025-09-09T14:16:22.3931515Z ) 2025-09-09T14:16:22.3931764Z ) 2025-09-09T14:16:22.3931900Z 
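# (editorial note, not test output) In the FX graph-mode path the quantized conv + relu is
# packaged as a Reference QuantizedConv1d inside ConvReLU1d (the reference-module
# convention dequantizes the weight in forward and runs the convolution in float), whereas
# the PT2E-converted graph above inlines the same arithmetic as explicit
# quantized_decomposed quantize/dequantize ops around an aten.conv1d. The all-zero diff
# tensors printed after each pair indicate the two paths produce matching outputs here.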
2025-09-09T14:16:22.3931905Z 2025-09-09T14:16:22.3931909Z 2025-09-09T14:16:22.3932042Z def forward(self, x): 2025-09-09T14:16:22.3932960Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:22.3934875Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:22.3936423Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:16:22.3937746Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0006353817298077047, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:16:22.3939765Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0006353817298077047, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:22.3940905Z return dequantize_per_tensor_default_1 2025-09-09T14:16:22.3941212Z 2025-09-09T14:16:22.3941512Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:22.3941921Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:22.3942181Z [0., 0., 0.], 2025-09-09T14:16:22.3942399Z [0., 0., 0.]]]) 2025-09-09T14:16:22.3942651Z model pt2e: GraphModule( 2025-09-09T14:16:22.3942893Z (conv): Module() 2025-09-09T14:16:22.3943225Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:22.3944300Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:16:22.3945597Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.2940996587276459, max_val=0.3226878345012665) 2025-09-09T14:16:22.3946184Z ) 2025-09-09T14:16:22.3946475Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:22.3947555Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:22.3948812Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:22.3949399Z ) 2025-09-09T14:16:22.3949688Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:22.3950766Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0006]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:22.3951996Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=0.1632835566997528) 2025-09-09T14:16:22.3952521Z ) 2025-09-09T14:16:22.3952709Z ) 2025-09-09T14:16:22.3952812Z 2025-09-09T14:16:22.3952816Z 2025-09-09T14:16:22.3952897Z 2025-09-09T14:16:22.3953001Z def forward(self, x): 2025-09-09T14:16:22.3953304Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:22.3953692Z conv_weight = self.conv.weight 2025-09-09T14:16:22.3954199Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:16:22.3954947Z 
activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:22.3955863Z conv1d = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:16:22.3956801Z relu = torch.ops.aten.relu.default(conv1d); conv1d = None 2025-09-09T14:16:22.3957347Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:16:22.3957951Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:16:22.3958391Z 2025-09-09T14:16:22.3958695Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:22.3959105Z model fx: GraphModule( 2025-09-09T14:16:22.3959447Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:22.3960531Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:22.3961797Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:22.3962375Z ) 2025-09-09T14:16:22.3962579Z (conv): ConvReLU1d( 2025-09-09T14:16:22.3962843Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:16:22.3963239Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:22.3964291Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:16:22.3965584Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.2940996587276459, max_val=0.3226878345012665) 2025-09-09T14:16:22.3966174Z ) 2025-09-09T14:16:22.3966352Z ) 2025-09-09T14:16:22.3966654Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:22.3967724Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0006]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:22.3968964Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=0.1632835566997528) 2025-09-09T14:16:22.3969498Z ) 2025-09-09T14:16:22.3969672Z ) 2025-09-09T14:16:22.3969773Z 2025-09-09T14:16:22.3969778Z 2025-09-09T14:16:22.3969794Z 2025-09-09T14:16:22.3969882Z def forward(self, x): 2025-09-09T14:16:22.3970264Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:22.3970868Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:16:22.3971489Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:16:22.3971959Z return activation_post_process_1 2025-09-09T14:16:22.3972248Z 2025-09-09T14:16:22.3972541Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:22.3972957Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:22.3973207Z [0., 0., 0.], 2025-09-09T14:16:22.3973469Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:16:22.3973794Z converted model pt2e: GraphModule( 2025-09-09T14:16:22.3974094Z (conv): Module() 2025-09-09T14:16:22.3974298Z ) 2025-09-09T14:16:22.3974414Z 2025-09-09T14:16:22.3974418Z 2025-09-09T14:16:22.3974422Z 2025-09-09T14:16:22.3974515Z def forward(self, x): 
2025-09-09T14:16:22.3974921Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:22.3975329Z quantize_per_tensor_default = self._frozen_param0 2025-09-09T14:16:22.3976379Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0025408491492271423, 0, -127, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:23.2905295Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:23.2907697Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:23.2909781Z conv1d = torch.ops.aten.conv1d.default(dequantize_per_tensor_default_1, dequantize_per_tensor_default); dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:16:23.2911187Z relu = torch.ops.aten.relu.default(conv1d); conv1d = None 2025-09-09T14:16:23.2912357Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.0006403276929631829, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:16:23.2914363Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.0006403276929631829, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:16:23.2915987Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:16:23.2916594Z 2025-09-09T14:16:23.2916991Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:23.2917533Z onverted model fx: GraphModule( 2025-09-09T14:16:23.2917905Z (conv): ConvReLU1d( 2025-09-09T14:16:23.2918424Z (0): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,), bias=False) 2025-09-09T14:16:23.2919014Z (1): ReLU() 2025-09-09T14:16:23.2919287Z ) 2025-09-09T14:16:23.2919522Z ) 2025-09-09T14:16:23.2919653Z 2025-09-09T14:16:23.2919671Z 2025-09-09T14:16:23.2919676Z 2025-09-09T14:16:23.2919794Z def forward(self, x): 2025-09-09T14:16:23.2920709Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:23.2922635Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:23.2924204Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:16:23.2925512Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0006403276929631829, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:16:23.2927536Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0006403276929631829, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:23.2928919Z return dequantize_per_tensor_default_1 2025-09-09T14:16:23.2929306Z 2025-09-09T14:16:23.2929708Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:23.2930260Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:23.2930584Z [0., 0., 0.], 2025-09-09T14:16:23.2930890Z [0., 0., 
0.]]]) 2025-09-09T14:16:23.2931204Z model pt2e: GraphModule( 2025-09-09T14:16:23.2931533Z (conv): Module() 2025-09-09T14:16:23.2931948Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:23.2933625Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026, 0.0026, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:16:23.2935740Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3263, -0.3276, -0.3045]), max_val=tensor([0.1376, 0.2760, 0.3298])) 2025-09-09T14:16:23.2936748Z ) 2025-09-09T14:16:23.2937126Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:23.2938570Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:23.2940341Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:23.2941134Z ) 2025-09-09T14:16:23.2941516Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:23.2942961Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0079]), zero_point=tensor([34], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:23.2944249Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.284900426864624, max_val=0.7360976338386536) 2025-09-09T14:16:23.2944822Z ) 2025-09-09T14:16:23.2945012Z ) 2025-09-09T14:16:23.2945115Z 2025-09-09T14:16:23.2945119Z 2025-09-09T14:16:23.2945123Z 2025-09-09T14:16:23.2945214Z def forward(self, x): 2025-09-09T14:16:23.2945527Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:23.2945909Z conv_weight = self.conv.weight 2025-09-09T14:16:23.2946409Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:16:23.2947070Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:23.2947988Z conv1d = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:16:23.2948936Z activation_post_process_2 = self.activation_post_process_2(conv1d); conv1d = None 2025-09-09T14:16:23.2949565Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:16:23.2949990Z 2025-09-09T14:16:23.2950295Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:23.2950691Z model fx: GraphModule( 2025-09-09T14:16:23.2951043Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:23.2952114Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:23.2953381Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:16:23.2953963Z ) 2025-09-09T14:16:23.2954149Z (conv): Conv1d( 2025-09-09T14:16:23.2954417Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:16:23.2954878Z (weight_fake_quant): 
FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:23.2955991Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026, 0.0026, 0.0026]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:16:23.2957484Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3263, -0.3276, -0.3045]), max_val=tensor([0.1376, 0.2760, 0.3298])) 2025-09-09T14:16:23.2958226Z ) 2025-09-09T14:16:23.2958424Z ) 2025-09-09T14:16:23.2958717Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:16:23.2959888Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0079]), zero_point=tensor([34], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:16:23.2961146Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.284900426864624, max_val=0.7360976338386536) 2025-09-09T14:16:23.2961724Z ) 2025-09-09T14:16:23.2961912Z ) 2025-09-09T14:16:23.2962012Z 2025-09-09T14:16:23.2962017Z 2025-09-09T14:16:23.2962020Z 2025-09-09T14:16:23.2962111Z def forward(self, x): 2025-09-09T14:16:23.2962502Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:16:23.2963092Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:16:23.2963818Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:16:23.2964306Z return activation_post_process_1 2025-09-09T14:16:23.2964587Z 2025-09-09T14:16:23.2964901Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:23.2965305Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:16:23.2965570Z [0., 0., 0.], 2025-09-09T14:16:23.2965823Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:16:23.2966158Z converted model pt2e: GraphModule( 2025-09-09T14:16:23.2966437Z (conv): Module() 2025-09-09T14:16:23.2966654Z ) 2025-09-09T14:16:23.2966755Z 2025-09-09T14:16:23.2966759Z 2025-09-09T14:16:23.2966762Z 2025-09-09T14:16:23.2966864Z def forward(self, x): 2025-09-09T14:16:23.2967159Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:16:23.2967525Z _scale_0 = self._scale_0 2025-09-09T14:16:23.2967794Z _zero_point_0 = self._zero_point_0 2025-09-09T14:16:23.2968154Z quantize_per_channel_default = self._frozen_param0 2025-09-09T14:16:23.2969292Z dequantize_per_channel_default = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_default, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel_default = _scale_0 = _zero_point_0 = None 2025-09-09T14:16:23.2970836Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:16:23.2972257Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:16:23.2973780Z conv1d = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel_default); dequantize_per_tensor_default = dequantize_per_channel_default = None 2025-09-09T14:16:23.2975150Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d, 0.007925482466816902, 34, -128, 127, torch.int8); conv1d = None 
2025-09-09T14:16:23.2976652Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.007925482466816902, 34, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:16:23.2977814Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:16:23.2978281Z 2025-09-09T14:16:23.2978577Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:16:23.2978996Z onverted model fx: GraphModule( 2025-09-09T14:16:23.2979447Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,), bias=False) 2025-09-09T14:16:23.2979895Z ) 2025-09-09T14:16:23.2979998Z 2025-09-09T14:16:23.2980002Z 2025-09-09T14:16:23.2980006Z 2025-09-09T14:16:23.2980107Z def forward(self, x): 2025-09-09T14:17:05.9851999Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:17:05.9853482Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:05.9854951Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:17:05.9855923Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.007925482466816902, 34, -128, 127, torch.int8); conv = None 2025-09-09T14:17:05.9864871Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.007925482466816902, 34, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:05.9866207Z return dequantize_per_tensor_default_1 2025-09-09T14:17:05.9866525Z 2025-09-09T14:17:05.9866844Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:05.9867324Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:17:05.9867588Z [0., 0., 0.], 2025-09-09T14:17:05.9867838Z [0., 0., 0.]]]) 2025-09-09T14:17:05.9868189Z model pt2e: GraphModule( 2025-09-09T14:17:05.9868610Z (conv): Module() 2025-09-09T14:17:05.9869215Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:05.9870568Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:17:05.9871862Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.327648401260376, max_val=0.32982930541038513) 2025-09-09T14:17:05.9872459Z ) 2025-09-09T14:17:05.9872754Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:05.9873836Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:05.9875190Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:17:05.9875764Z ) 2025-09-09T14:17:05.9876076Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:05.9877142Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0079]), zero_point=tensor([34], dtype=torch.int32), dtype=torch.int8, quant_min=-128, 
quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:05.9878410Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.284900426864624, max_val=0.7398542761802673) 2025-09-09T14:17:05.9878994Z ) 2025-09-09T14:17:05.9879180Z ) 2025-09-09T14:17:05.9879297Z 2025-09-09T14:17:05.9879302Z 2025-09-09T14:17:05.9879305Z 2025-09-09T14:17:05.9879398Z def forward(self, x): 2025-09-09T14:17:05.9879702Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:05.9880082Z conv_weight = self.conv.weight 2025-09-09T14:17:05.9880600Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:17:05.9881264Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:05.9882198Z conv1d = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:17:05.9883141Z activation_post_process_2 = self.activation_post_process_2(conv1d); conv1d = None 2025-09-09T14:17:05.9883781Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:17:05.9884225Z 2025-09-09T14:17:05.9884536Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:05.9884952Z model fx: GraphModule( 2025-09-09T14:17:05.9885294Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:05.9886381Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:05.9887767Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:17:05.9888358Z ) 2025-09-09T14:17:05.9888558Z (conv): Conv1d( 2025-09-09T14:17:05.9888810Z 3, 3, kernel_size=(3,), stride=(1,), bias=False 2025-09-09T14:17:05.9889213Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:05.9890261Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0026]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:17:05.9891622Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.327648401260376, max_val=0.32982930541038513) 2025-09-09T14:17:05.9892212Z ) 2025-09-09T14:17:05.9892391Z ) 2025-09-09T14:17:05.9892696Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:05.9893762Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0079]), zero_point=tensor([34], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:05.9895037Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.284900426864624, max_val=0.7398542761802673) 2025-09-09T14:17:05.9895609Z ) 2025-09-09T14:17:05.9895798Z ) 2025-09-09T14:17:05.9895899Z 2025-09-09T14:17:05.9895903Z 2025-09-09T14:17:05.9895907Z 2025-09-09T14:17:05.9896010Z def forward(self, x): 2025-09-09T14:17:05.9896391Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:05.9896991Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:17:05.9897598Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 
2025-09-09T14:17:05.9898082Z return activation_post_process_1 2025-09-09T14:17:05.9898360Z 2025-09-09T14:17:05.9898670Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:05.9899082Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:17:05.9899330Z [0., 0., 0.], 2025-09-09T14:17:05.9899593Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:17:05.9899918Z converted model pt2e: GraphModule( 2025-09-09T14:17:05.9900216Z (conv): Module() 2025-09-09T14:17:05.9900425Z ) 2025-09-09T14:17:05.9900543Z 2025-09-09T14:17:05.9900547Z 2025-09-09T14:17:05.9900551Z 2025-09-09T14:17:05.9900640Z def forward(self, x): 2025-09-09T14:17:05.9900949Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:05.9901370Z quantize_per_tensor_default = self._frozen_param0 2025-09-09T14:17:05.9902413Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0025970812421292067, 0, -127, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:05.9903849Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:17:05.9905289Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:05.9906847Z conv1d = torch.ops.aten.conv1d.default(dequantize_per_tensor_default_1, dequantize_per_tensor_default); dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:17:05.9908204Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d, 0.007940215058624744, 34, -128, 127, torch.int8); conv1d = None 2025-09-09T14:17:05.9909704Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.007940215058624744, 34, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:17:05.9911378Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:17:05.9911845Z 2025-09-09T14:17:05.9912157Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:05.9912570Z onverted model fx: GraphModule( 2025-09-09T14:17:05.9913029Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,), bias=False) 2025-09-09T14:17:05.9913478Z ) 2025-09-09T14:17:05.9913596Z 2025-09-09T14:17:05.9913600Z 2025-09-09T14:17:05.9913604Z 2025-09-09T14:17:05.9913786Z def forward(self, x): 2025-09-09T14:17:05.9914491Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:17:05.9915976Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:05.9917149Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:17:05.9918133Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.007940215058624744, 34, -128, 127, torch.int8); conv = None 2025-09-09T14:17:05.9919609Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 
0.007940215058624744, 34, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:05.9920639Z return dequantize_per_tensor_default_1 2025-09-09T14:17:05.9920936Z 2025-09-09T14:17:05.9921253Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:05.9921657Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:17:05.9921924Z [0., 0., 0.], 2025-09-09T14:17:05.9922161Z [0., 0., 0.]]]) 2025-09-09T14:17:05.9922602Z PASSED 2025-09-09T14:17:05.9923361Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_transpose_bn PASSED 2025-09-09T14:17:05.9924538Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_transpose_bn_relu PASSED 2025-09-09T14:17:05.9925640Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_inplace_add_relu model pt2e: GraphModule( 2025-09-09T14:17:05.9926333Z (conv): Module() 2025-09-09T14:17:05.9926670Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2094440Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:17:06.2095960Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.5429]), max_val=tensor([-0.5429])) 2025-09-09T14:17:06.2096606Z ) 2025-09-09T14:17:06.2096938Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2098026Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0012]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:06.2099303Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.31662631034851074, max_val=-0.1489601731300354) 2025-09-09T14:17:06.2099904Z ) 2025-09-09T14:17:06.2100199Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2101298Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:06.2102587Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.26761165261268616, max_val=0.3586132824420929) 2025-09-09T14:17:06.2103167Z ) 2025-09-09T14:17:06.2103758Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2104841Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0005]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:06.2106125Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.04198697209358215, max_val=0.11820143461227417) 2025-09-09T14:17:06.2106829Z ) 2025-09-09T14:17:06.2107054Z ) 2025-09-09T14:17:06.2107165Z 2025-09-09T14:17:06.2107170Z 2025-09-09T14:17:06.2107186Z 2025-09-09T14:17:06.2107275Z def forward(self, x): 2025-09-09T14:17:06.2107577Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:06.2107957Z conv_weight = self.conv.weight 2025-09-09T14:17:06.2108459Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:17:06.2108998Z conv_bias = 
self.conv.bias 2025-09-09T14:17:06.2109417Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:06.2110507Z conv1d = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, conv_bias); activation_post_process_1 = conv_bias = None 2025-09-09T14:17:06.2111437Z activation_post_process_2 = self.activation_post_process_2(conv1d); conv1d = None 2025-09-09T14:17:06.2112346Z add_ = torch.ops.aten.add_.Tensor(activation_post_process_2, activation_post_process_0); activation_post_process_2 = activation_post_process_0 = None 2025-09-09T14:17:06.2113172Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:17:06.2113714Z activation_post_process_3 = self.activation_post_process_3(relu_); relu_ = None 2025-09-09T14:17:06.2114328Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:17:06.2114845Z 2025-09-09T14:17:06.2115154Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:06.2115569Z model fx: GraphModule( 2025-09-09T14:17:06.2115911Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2117003Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0012]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:06.2118288Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.31662631034851074, max_val=-0.1489601731300354) 2025-09-09T14:17:06.2118878Z ) 2025-09-09T14:17:06.2119071Z (conv): Conv1d( 2025-09-09T14:17:06.2119301Z 1, 1, kernel_size=(1,), stride=(1,) 2025-09-09T14:17:06.2119671Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2120747Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:17:06.2122108Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.5429]), max_val=tensor([-0.5429])) 2025-09-09T14:17:06.2122760Z ) 2025-09-09T14:17:06.2122940Z ) 2025-09-09T14:17:06.2123244Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2124319Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:06.2125614Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.26761165261268616, max_val=0.3586132824420929) 2025-09-09T14:17:06.2126206Z ) 2025-09-09T14:17:06.2126403Z (relu): ReLU(inplace=True) 2025-09-09T14:17:06.2126778Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:06.2127996Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0005]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:06.2129295Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.04198697209358215, max_val=0.11820143461227417) 2025-09-09T14:17:06.2129898Z ) 2025-09-09T14:17:06.2130076Z ) 2025-09-09T14:17:06.2130188Z 2025-09-09T14:17:06.2130193Z 2025-09-09T14:17:06.2130211Z 2025-09-09T14:17:06.2130388Z def forward(self, x): 
2025-09-09T14:17:06.2130769Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:06.2131260Z conv = self.conv(activation_post_process_0) 2025-09-09T14:17:06.2131743Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:17:06.2132544Z add = activation_post_process_1 + activation_post_process_0; activation_post_process_1 = activation_post_process_0 = None 2025-09-09T14:17:06.2133196Z relu = self.relu(add); add = None 2025-09-09T14:17:06.2133647Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:17:06.2134131Z return activation_post_process_2 2025-09-09T14:17:06.2134406Z 2025-09-09T14:17:06.2134715Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:06.2135165Z diff: tensor([[[0., 0., 0.]]], grad_fn=) 2025-09-09T14:17:06.2135534Z converted model pt2e: GraphModule( 2025-09-09T14:17:06.2135832Z (conv): Module() 2025-09-09T14:17:06.2136037Z ) 2025-09-09T14:17:06.2136137Z 2025-09-09T14:17:06.2136142Z 2025-09-09T14:17:06.2136146Z 2025-09-09T14:17:06.2136245Z def forward(self, x): 2025-09-09T14:17:06.2136544Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:06.2136911Z _scale_0 = self._scale_0 2025-09-09T14:17:06.2137181Z _zero_point_0 = self._zero_point_0 2025-09-09T14:17:06.2137543Z quantize_per_channel_default = self._frozen_param0 2025-09-09T14:17:06.2138692Z dequantize_per_channel_default = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_default, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel_default = _scale_0 = _zero_point_0 = None 2025-09-09T14:17:06.2139778Z conv_bias = self.conv.bias 2025-09-09T14:17:06.2140522Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.0012416718527674675, 127, -128, 127, torch.int8); x = None 2025-09-09T14:17:06.2141835Z dequantize_per_tensor_default_4 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0012416718527674675, 127, -128, 127, torch.int8) 2025-09-09T14:17:06.2143385Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0012416718527674675, 127, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:06.2145050Z conv1d = torch.ops.aten.conv1d.default(dequantize_per_tensor_default_3, dequantize_per_channel_default, conv_bias); dequantize_per_tensor_default_3 = dequantize_per_channel_default = conv_bias = None 2025-09-09T14:17:06.2146512Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d, 0.0014063265407457948, -128, -128, 127, torch.int8); conv1d = None 2025-09-09T14:17:06.2148039Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0014063265407457948, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:06.2149595Z add_ = torch.ops.aten.add_.Tensor(dequantize_per_tensor_default_1, dequantize_per_tensor_default_4); dequantize_per_tensor_default_1 = dequantize_per_tensor_default_4 = None 2025-09-09T14:17:06.2150488Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:17:06.2151445Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu_, 0.00046353504876606166, -128, -128, 127, torch.int8); relu_ = None 
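The converted graphs in this output replace each fake-quantize module with an explicit quantize_per_tensor / dequantize_per_tensor pair. A minimal numeric sketch of what that pair computes, assuming the standard affine mapping; the scale and zero_point below are copied from the graph above, and the helper names are illustrative, not the quantized_decomposed op implementation:

    import torch

    def quantize_per_tensor(x, scale, zero_point, quant_min, quant_max):
        # affine quantization: q = clamp(round(x / scale) + zero_point, quant_min, quant_max)
        return torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max).to(torch.int8)

    def dequantize_per_tensor(q, scale, zero_point):
        # inverse mapping back to float: (q - zero_point) * scale
        return (q.to(torch.float32) - zero_point) * scale

    x = torch.tensor([-0.3166, -0.1490, -0.2000])
    q = quantize_per_tensor(x, 0.0012416718527674675, 127, -128, 127)
    x_hat = dequantize_per_tensor(q, 0.0012416718527674675, 127)
    print(q)      # tensor([-128,    7,  -34], dtype=torch.int8)
    print(x_hat)  # each value lands within scale/2 of the original input

The round-trip error bound (scale/2 per element) is why the diff tensors printed after each comparison come out as exact zeros: both the pt2e and fx paths apply the same quantize/dequantize pairs with the same parameters.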
2025-09-09T14:17:06.2152959Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.00046353504876606166, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:17:06.2154124Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:17:06.2154594Z 2025-09-09T14:17:06.2155085Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:06.2155511Z onverted model fx: GraphModule( 2025-09-09T14:17:06.2155930Z (conv): QuantizedConv1d(Reference)(1, 1, kernel_size=(1,), stride=(1,)) 2025-09-09T14:17:06.2156362Z (relu): ReLU(inplace=True) 2025-09-09T14:17:06.2156627Z ) 2025-09-09T14:17:06.2156732Z 2025-09-09T14:17:06.2156736Z 2025-09-09T14:17:06.2156740Z 2025-09-09T14:17:06.2156829Z def forward(self, x): 2025-09-09T14:17:06.2157546Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.0012416718527674675, 127, -128, 127, torch.int8); x = None 2025-09-09T14:17:06.2159005Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0012416718527674675, 127, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:06.2160027Z conv = self.conv(dequantize_per_tensor_default) 2025-09-09T14:17:07.1953301Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0014063265407457948, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:17:07.1955015Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0014063265407457948, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:07.1956465Z add = dequantize_per_tensor_default_1 + dequantize_per_tensor_default; dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:17:07.1957194Z relu = self.relu(add); add = None 2025-09-09T14:17:07.1957992Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.00046353504876606166, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:17:07.1959504Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.00046353504876606166, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:17:07.1960549Z return dequantize_per_tensor_default_2 2025-09-09T14:17:07.1960847Z 2025-09-09T14:17:07.1961165Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:07.1961572Z diff: tensor([[[0., 0., 0.]]]) 2025-09-09T14:17:07.1961853Z model pt2e: GraphModule( 2025-09-09T14:17:07.1962110Z (conv): Module() 2025-09-09T14:17:07.1962441Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1963541Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:17:07.1964926Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.5428858995437622, max_val=-0.5428858995437622) 2025-09-09T14:17:07.1965507Z ) 2025-09-09T14:17:07.1965818Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1966886Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), 
scale=tensor([0.0012]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:07.1968166Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.31662631034851074, max_val=-0.1489601731300354) 2025-09-09T14:17:07.1969020Z ) 2025-09-09T14:17:07.1969331Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1970441Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:07.1971720Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.26761165261268616, max_val=0.3586132824420929) 2025-09-09T14:17:07.1972415Z ) 2025-09-09T14:17:07.1972721Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1973790Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0005]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:07.1975075Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.04198697209358215, max_val=0.11820143461227417) 2025-09-09T14:17:07.1975673Z ) 2025-09-09T14:17:07.1975854Z ) 2025-09-09T14:17:07.1975959Z 2025-09-09T14:17:07.1975964Z 2025-09-09T14:17:07.1975968Z 2025-09-09T14:17:07.1976073Z def forward(self, x): 2025-09-09T14:17:07.1976378Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:07.1976760Z conv_weight = self.conv.weight 2025-09-09T14:17:07.1977261Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:17:07.1977800Z conv_bias = self.conv.bias 2025-09-09T14:17:07.1978219Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:07.1979109Z conv1d = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, conv_bias); activation_post_process_1 = conv_bias = None 2025-09-09T14:17:07.1980032Z activation_post_process_2 = self.activation_post_process_2(conv1d); conv1d = None 2025-09-09T14:17:07.1980944Z add_ = torch.ops.aten.add_.Tensor(activation_post_process_2, activation_post_process_0); activation_post_process_2 = activation_post_process_0 = None 2025-09-09T14:17:07.1981757Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:17:07.1982295Z activation_post_process_3 = self.activation_post_process_3(relu_); relu_ = None 2025-09-09T14:17:07.1982906Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:17:07.1983350Z 2025-09-09T14:17:07.1983649Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:07.1984066Z model fx: GraphModule( 2025-09-09T14:17:07.1984405Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1985488Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0012]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:07.1986770Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.31662631034851074, max_val=-0.1489601731300354) 2025-09-09T14:17:07.1987354Z ) 2025-09-09T14:17:07.1987550Z (conv): Conv1d( 2025-09-09T14:17:07.1987781Z 1, 1, kernel_size=(1,), 
stride=(1,) 2025-09-09T14:17:07.1988149Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1989197Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:17:07.1990506Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.5428858995437622, max_val=-0.5428858995437622) 2025-09-09T14:17:07.1991097Z ) 2025-09-09T14:17:07.1991274Z ) 2025-09-09T14:17:07.1991575Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1992717Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:07.1994003Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.26761165261268616, max_val=0.3586132824420929) 2025-09-09T14:17:07.1994593Z ) 2025-09-09T14:17:07.1994886Z (relu): ReLU(inplace=True) 2025-09-09T14:17:07.1995264Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:07.1996416Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0005]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:07.1997701Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.04198697209358215, max_val=0.11820143461227417) 2025-09-09T14:17:07.1998293Z ) 2025-09-09T14:17:07.1998467Z ) 2025-09-09T14:17:07.1998568Z 2025-09-09T14:17:07.1998577Z 2025-09-09T14:17:07.1998581Z 2025-09-09T14:17:07.1998683Z def forward(self, x): 2025-09-09T14:17:07.1999057Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:07.1999541Z conv = self.conv(activation_post_process_0) 2025-09-09T14:17:07.2000020Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:17:07.2000810Z add = activation_post_process_1 + activation_post_process_0; activation_post_process_1 = activation_post_process_0 = None 2025-09-09T14:17:07.2001452Z relu = self.relu(add); add = None 2025-09-09T14:17:07.2001903Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:17:07.2002385Z return activation_post_process_2 2025-09-09T14:17:07.2002660Z 2025-09-09T14:17:07.2002963Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:07.2003418Z diff: tensor([[[0., 0., 0.]]], grad_fn=) 2025-09-09T14:17:07.2003799Z converted model pt2e: GraphModule( 2025-09-09T14:17:07.2004085Z (conv): Module() 2025-09-09T14:17:07.2004306Z ) 2025-09-09T14:17:07.2004409Z 2025-09-09T14:17:07.2004413Z 2025-09-09T14:17:07.2004417Z 2025-09-09T14:17:07.2004522Z def forward(self, x): 2025-09-09T14:17:07.2004821Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:07.2005238Z quantize_per_tensor_default = self._frozen_param0 2025-09-09T14:17:07.2006260Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.004274691920727491, 0, -127, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:07.2007265Z conv_bias = self.conv.bias 2025-09-09T14:17:07.2008009Z quantize_per_tensor_default_1 = 
torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.0012416718527674675, 127, -128, 127, torch.int8); x = None 2025-09-09T14:17:07.2009328Z dequantize_per_tensor_default_5 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0012416718527674675, 127, -128, 127, torch.int8) 2025-09-09T14:17:07.2011195Z dequantize_per_tensor_default_4 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0012416718527674675, 127, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:07.2012840Z conv1d = torch.ops.aten.conv1d.default(dequantize_per_tensor_default_4, dequantize_per_tensor_default, conv_bias); dequantize_per_tensor_default_4 = dequantize_per_tensor_default = conv_bias = None 2025-09-09T14:17:07.2014296Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d, 0.0014063265407457948, -128, -128, 127, torch.int8); conv1d = None 2025-09-09T14:17:07.2015947Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.0014063265407457948, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:17:07.2017491Z add_ = torch.ops.aten.add_.Tensor(dequantize_per_tensor_default_2, dequantize_per_tensor_default_5); dequantize_per_tensor_default_2 = dequantize_per_tensor_default_5 = None 2025-09-09T14:17:07.2018384Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:17:29.4735346Z quantize_per_tensor_default_3 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu_, 0.00046353504876606166, -128, -128, 127, torch.int8); relu_ = None 2025-09-09T14:17:29.4737278Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_3, 0.00046353504876606166, -128, -128, 127, torch.int8); quantize_per_tensor_default_3 = None 2025-09-09T14:17:29.4738465Z return pytree.tree_unflatten((dequantize_per_tensor_default_3,), self._out_spec) 2025-09-09T14:17:29.4738937Z 2025-09-09T14:17:29.4739235Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:29.4739679Z onverted model fx: GraphModule( 2025-09-09T14:17:29.4740098Z (conv): QuantizedConv1d(Reference)(1, 1, kernel_size=(1,), stride=(1,)) 2025-09-09T14:17:29.4740530Z (relu): ReLU(inplace=True) 2025-09-09T14:17:29.4740798Z ) 2025-09-09T14:17:29.4740903Z 2025-09-09T14:17:29.4740907Z 2025-09-09T14:17:29.4740939Z 2025-09-09T14:17:29.4741042Z def forward(self, x): 2025-09-09T14:17:29.4741746Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.0012416718527674675, 127, -128, 127, torch.int8); x = None 2025-09-09T14:17:29.4743212Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0012416718527674675, 127, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:29.4744242Z conv = self.conv(dequantize_per_tensor_default) 2025-09-09T14:17:29.4745094Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0014063265407457948, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:17:29.4746607Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0014063265407457948, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:29.4748014Z 
add = dequantize_per_tensor_default_1 + dequantize_per_tensor_default; dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:17:29.4748740Z relu = self.relu(add); add = None 2025-09-09T14:17:29.4749548Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.00046353504876606166, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:17:29.4751052Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.00046353504876606166, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:17:29.4752092Z return dequantize_per_tensor_default_2 2025-09-09T14:17:29.4752386Z 2025-09-09T14:17:29.4752695Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:29.4753111Z diff: tensor([[[0., 0., 0.]]]) 2025-09-09T14:17:29.4753560Z PASSED 2025-09-09T14:17:29.4754378Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_per_channel_weight_custom_dtype PASSED 2025-09-09T14:17:29.4755692Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_preserve_source_fn_stack PASSED 2025-09-09T14:17:29.4756819Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_update_shared_qspec model pt2e: GraphModule( 2025-09-09T14:17:29.4757522Z (conv): Module() 2025-09-09T14:17:29.4757759Z (bn): Module() 2025-09-09T14:17:29.4758094Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:29.4759314Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:29.4760604Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:17:29.4761192Z ) 2025-09-09T14:17:29.4761483Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:29.4762627Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0024, 0.0016, 0.0025]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:17:29.4764181Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3010, -0.2094, -0.2957]), max_val=tensor([0.2519, 0.1882, 0.3171])) 2025-09-09T14:17:29.4764926Z ) 2025-09-09T14:17:29.4765236Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:29.4766294Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:29.4767552Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.379811406135559) 2025-09-09T14:17:29.4768118Z ) 2025-09-09T14:17:29.4768424Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:29.4769496Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:29.4770741Z (activation_post_process): 
MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.379811406135559) 2025-09-09T14:17:29.4771315Z ) 2025-09-09T14:17:29.4771494Z ) 2025-09-09T14:17:29.4771608Z 2025-09-09T14:17:29.4771613Z 2025-09-09T14:17:29.4771617Z 2025-09-09T14:17:29.4771707Z def forward(self, x): 2025-09-09T14:17:29.4772010Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:29.4772395Z conv_weight = self.conv.weight 2025-09-09T14:17:29.4772703Z conv_bias = self.conv.bias 2025-09-09T14:17:29.4772973Z bn_weight = self.bn.weight 2025-09-09T14:17:29.4773258Z bn_bias = self.bn.bias 2025-09-09T14:17:29.4773532Z bn_running_mean = self.bn.running_mean 2025-09-09T14:17:29.4773866Z bn_running_var = self.bn.running_var 2025-09-09T14:17:29.4774224Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:17:29.4774718Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:29.4775369Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:17:29.4775968Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:17:29.4776403Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:17:29.4776850Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:17:29.4777337Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:17:29.4777884Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:17:29.4778520Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:17:29.4779218Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:17:29.4780328Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:17:29.4781336Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:17:29.4781994Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:17:29.4782642Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1]); conv_bias = None 2025-09-09T14:17:29.4783269Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:17:29.4784254Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:17:29.4785390Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:17:29.4786225Z hardtanh = torch.ops.aten.hardtanh.default(activation_post_process_2, -1.0, 1.0); activation_post_process_2 = None 2025-09-09T14:17:29.4787037Z activation_post_process_3 = self.activation_post_process_3(hardtanh); hardtanh = None 2025-09-09T14:17:29.4787687Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:17:29.4788117Z 2025-09-09T14:17:29.4788427Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:29.4788827Z model fx: GraphModule( 2025-09-09T14:17:29.4789179Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:29.4790253Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, 
quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:29.4791532Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:17:29.4792119Z ) 2025-09-09T14:17:29.4792305Z (conv): ConvBn1d( 2025-09-09T14:17:29.4792551Z 3, 3, kernel_size=(3,), stride=(1,) 2025-09-09T14:17:29.4792997Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:17:29.4793538Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:29.4794647Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0024, 0.0016, 0.0025]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:17:29.4796211Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.3010, -0.2094, -0.2957]), max_val=tensor([0.2519, 0.1882, 0.3171])) 2025-09-09T14:17:29.4796964Z ) 2025-09-09T14:17:29.4797146Z ) 2025-09-09T14:17:29.4797455Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:29.4798532Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:29.4799791Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.379811406135559) 2025-09-09T14:17:29.4800382Z ) 2025-09-09T14:17:39.9984353Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:17:39.9985069Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:39.9986174Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:39.9988073Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.379811406135559) 2025-09-09T14:17:39.9988681Z ) 2025-09-09T14:17:39.9988861Z ) 2025-09-09T14:17:39.9988981Z 2025-09-09T14:17:39.9988986Z 2025-09-09T14:17:39.9988990Z 2025-09-09T14:17:39.9989080Z def forward(self, x): 2025-09-09T14:17:39.9989469Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:39.9990340Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:17:39.9990961Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:17:39.9991607Z hardtanh = self.hardtanh(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:17:39.9992536Z activation_post_process_2 = self.activation_post_process_2(hardtanh); hardtanh = None 2025-09-09T14:17:39.9993047Z return activation_post_process_2 2025-09-09T14:17:39.9993344Z 2025-09-09T14:17:39.9993781Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:39.9994182Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:17:39.9994441Z [0., 0., 0.], 2025-09-09T14:17:39.9994749Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:17:39.9995088Z converted model pt2e: GraphModule( 2025-09-09T14:17:39.9995366Z (conv): Module() 2025-09-09T14:17:39.9995594Z (bn): Module() 2025-09-09T14:17:39.9995792Z ) 2025-09-09T14:17:39.9995917Z 2025-09-09T14:17:39.9995921Z 2025-09-09T14:17:39.9995925Z 
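The scale/zero_point constants in the converted forwards come directly from the observer min/max values printed above. A standalone arithmetic check for this test's input observer (min_val=-1.3264806, max_val=1.318617, int8 affine, qmin=-128, qmax=127), assuming the usual min-max mapping with the observed range extended so that zero is exactly representable; this is an approximation of the observer logic, not a copy of it:

    min_val, max_val = -1.3264806270599365, 1.318617343902588
    qmin, qmax = -128, 127

    # extend the observed range to include 0.0 before computing qparams
    min_val, max_val = min(min_val, 0.0), max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = qmin - round(min_val / scale)

    print(scale)       # ~0.010372933, the value used by quantize_per_tensor below
    print(zero_point)  # 0, matching zero_point=tensor([0]) in the dumps above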
2025-09-09T14:17:39.9996013Z def forward(self, x): 2025-09-09T14:17:39.9996322Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:39.9996682Z conv_bias = self.conv.bias 2025-09-09T14:17:39.9997015Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:17:39.9997820Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:17:39.9999520Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:17:40.0001505Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:17:40.0002160Z _scale_0 = self._scale_0 2025-09-09T14:17:40.0002462Z _zero_point_0 = self._zero_point_0 2025-09-09T14:17:40.0002802Z quantize_per_channel = self._frozen_param0 2025-09-09T14:17:40.0003809Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:17:40.0005374Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:17:40.0006751Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 0.010931476950645447, 1, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:17:40.0008250Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010931476950645447, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:40.0009598Z hardtanh = torch.ops.aten.hardtanh.default(dequantize_per_tensor_default_1, -1.0, 1.0); dequantize_per_tensor_default_1 = None 2025-09-09T14:17:40.0010960Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 0.010931476950645447, 1, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:17:40.0012473Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010931476950645447, 1, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:17:40.0013649Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:17:40.0014108Z 2025-09-09T14:17:40.0014418Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:40.0014834Z onverted model fx: GraphModule( 2025-09-09T14:17:40.0015255Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:17:40.0015890Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:17:40.0016209Z ) 2025-09-09T14:17:40.0016315Z 2025-09-09T14:17:40.0016319Z 2025-09-09T14:17:40.0016323Z 2025-09-09T14:17:40.0016429Z def forward(self, x): 2025-09-09T14:17:40.0017115Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:17:40.0018553Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 
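The frozen weight parameter dequantized above is stored per-channel: one scale per output channel, zero_point fixed at 0, and a symmetric [-127, 127] range, matching the per_channel_symmetric weight observers earlier in this output. A standalone sketch of that scheme (illustrative helper, not the quantized_decomposed op itself):

    import torch

    def quantize_weight_per_channel(w, quant_min=-127, quant_max=127):
        # one scale per output channel (dim 0), zero_point fixed at 0 (symmetric scheme)
        max_abs = w.abs().amax(dim=tuple(range(1, w.dim())))
        scale = max_abs / quant_max
        shape = (-1,) + (1,) * (w.dim() - 1)
        q = torch.clamp(torch.round(w / scale.reshape(shape)), quant_min, quant_max).to(torch.int8)
        return q, scale

    w = torch.randn(3, 3, 3)                     # (out_channels, in_channels, kernel_size)
    q, scale = quantize_weight_per_channel(w)
    w_hat = q.float() * scale.reshape(-1, 1, 1)  # dequantize
    print((w - w_hat).abs().max())               # error is at most scale/2 per channel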
2025-09-09T14:17:40.0019825Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:17:40.0020785Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.010931476950645447, 1, -128, 127, torch.int8); conv = None 2025-09-09T14:17:40.0022258Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010931476950645447, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:17:40.0023492Z hardtanh = self.hardtanh(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:17:40.0024546Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 0.010931476950645447, 1, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:17:40.0026054Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010931476950645447, 1, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:17:40.0027062Z return dequantize_per_tensor_default_2 2025-09-09T14:17:40.0027366Z 2025-09-09T14:17:40.0027673Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:17:40.0028072Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:17:40.0028334Z [0., 0., 0.], 2025-09-09T14:17:40.0028553Z [0., 0., 0.]]]) 2025-09-09T14:17:40.0028803Z model pt2e: GraphModule( 2025-09-09T14:17:40.0029046Z (conv): Module() 2025-09-09T14:17:40.0029267Z (bn): Module() 2025-09-09T14:17:40.0029581Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:40.0030659Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:40.0031942Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:17:40.0032514Z ) 2025-09-09T14:17:40.0032815Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:40.0033890Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0025]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:17:40.0035252Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.30097201466560364, max_val=0.3171221613883972) 2025-09-09T14:17:40.0035850Z ) 2025-09-09T14:17:40.0036140Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:40.0037207Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:40.0038456Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.3807092905044556) 2025-09-09T14:17:40.0039038Z ) 2025-09-09T14:17:40.0039332Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:17:40.0040473Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:17:40.0041735Z 
(activation_post_process): MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.3807092905044556) 2025-09-09T14:17:40.0042306Z ) 2025-09-09T14:17:40.0042497Z ) 2025-09-09T14:17:40.0042599Z 2025-09-09T14:17:40.0042604Z 2025-09-09T14:17:40.0042607Z 2025-09-09T14:17:40.0042712Z def forward(self, x): 2025-09-09T14:17:40.0043071Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:17:40.0043452Z conv_weight = self.conv.weight 2025-09-09T14:17:40.0043743Z conv_bias = self.conv.bias 2025-09-09T14:17:40.0044030Z bn_weight = self.bn.weight 2025-09-09T14:17:40.0044297Z bn_bias = self.bn.bias 2025-09-09T14:17:40.0044582Z bn_running_mean = self.bn.running_mean 2025-09-09T14:17:40.0044907Z bn_running_var = self.bn.running_var 2025-09-09T14:17:40.0045280Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:17:40.0045764Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:17:40.0046429Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:17:40.0047026Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:17:40.0047448Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:17:40.0047910Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:17:40.0048390Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1]) 2025-09-09T14:17:40.0048948Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:17:40.0049582Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:17:40.0050264Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:17:40.0051386Z conv1d_1 = torch.ops.aten.conv1d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:17:40.0052382Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1]); div = None 2025-09-09T14:18:12.1902155Z div_1 = torch.ops.aten.div.Tensor(conv1d_1, reshape_1); conv1d_1 = reshape_1 = None 2025-09-09T14:18:12.1902879Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1]); conv_bias = None 2025-09-09T14:18:12.1903744Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:18:12.1905138Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:18:12.1906675Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:18:12.1907580Z hardtanh = torch.ops.aten.hardtanh.default(activation_post_process_2, -1.0, 1.0); activation_post_process_2 = None 2025-09-09T14:18:12.1908381Z activation_post_process_3 = self.activation_post_process_3(hardtanh); hardtanh = None 2025-09-09T14:18:12.1909030Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:18:12.1909476Z 2025-09-09T14:18:12.1909775Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:12.1910379Z model fx: GraphModule( 2025-09-09T14:18:12.1910721Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:12.1911899Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0104]), zero_point=tensor([0], dtype=torch.int32), 
dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:12.1913562Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3264806270599365, max_val=1.318617343902588) 2025-09-09T14:18:12.1914144Z ) 2025-09-09T14:18:12.1914348Z (conv): ConvBn1d( 2025-09-09T14:18:12.1914586Z 3, 3, kernel_size=(3,), stride=(1,) 2025-09-09T14:18:12.1915116Z (bn): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:18:12.1915628Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:12.1916839Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0025]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:18:12.1919048Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.30097201466560364, max_val=0.3171221613883972) 2025-09-09T14:18:12.1919935Z ) 2025-09-09T14:18:12.1920127Z ) 2025-09-09T14:18:12.1920417Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:12.1921507Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:12.1922783Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.3807092905044556) 2025-09-09T14:18:12.1923352Z ) 2025-09-09T14:18:12.1923593Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:18:12.1924019Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:12.1925105Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0109]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:12.1926383Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.407715082168579, max_val=1.3807092905044556) 2025-09-09T14:18:12.1926953Z ) 2025-09-09T14:18:12.1927148Z ) 2025-09-09T14:18:12.1927260Z 2025-09-09T14:18:12.1927265Z 2025-09-09T14:18:12.1927269Z 2025-09-09T14:18:12.1927360Z def forward(self, x): 2025-09-09T14:18:12.1927753Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:12.1928341Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:18:12.1928961Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:18:12.1929618Z hardtanh = self.hardtanh(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:18:12.1930304Z activation_post_process_2 = self.activation_post_process_2(hardtanh); hardtanh = None 2025-09-09T14:18:12.1930819Z return activation_post_process_2 2025-09-09T14:18:12.1931098Z 2025-09-09T14:18:12.1931404Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:12.1931803Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:18:12.1932064Z [0., 0., 0.], 2025-09-09T14:18:12.1932325Z [0., 0., 0.]]], grad_fn=) 2025-09-09T14:18:12.1932649Z converted model pt2e: GraphModule( 2025-09-09T14:18:12.1932941Z (conv): Module() 2025-09-09T14:18:12.1933152Z (bn): Module() 2025-09-09T14:18:12.1933361Z ) 2025-09-09T14:18:12.1933461Z 2025-09-09T14:18:12.1933465Z 2025-09-09T14:18:12.1933469Z 2025-09-09T14:18:12.1933557Z def forward(self, x): 
2025-09-09T14:18:12.1933868Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:12.1934225Z conv_bias = self.conv.bias 2025-09-09T14:18:12.1934560Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:18:12.1935369Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:18:12.1936919Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:12.1938128Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:18:12.1938673Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:18:12.1939592Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0024970248341560364, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:18:12.1941044Z conv1d_2 = torch.ops.aten.conv1d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:18:12.1942489Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1d_2, 0.010934998281300068, 1, -128, 127, torch.int8); conv1d_2 = None 2025-09-09T14:18:12.1943999Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010934998281300068, 1, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:18:12.1945369Z hardtanh = torch.ops.aten.hardtanh.default(dequantize_per_tensor_default_2, -1.0, 1.0); dequantize_per_tensor_default_2 = None 2025-09-09T14:18:12.1946536Z quantize_per_tensor_default_3 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 0.010934998281300068, 1, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:18:12.1948046Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_3, 0.010934998281300068, 1, -128, 127, torch.int8); quantize_per_tensor_default_3 = None 2025-09-09T14:18:12.1949211Z return pytree.tree_unflatten((dequantize_per_tensor_default_3,), self._out_spec) 2025-09-09T14:18:12.1949687Z 2025-09-09T14:18:12.1950002Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:12.1950417Z onverted model fx: GraphModule( 2025-09-09T14:18:12.1950842Z (conv): QuantizedConv1d(Reference)(3, 3, kernel_size=(3,), stride=(1,)) 2025-09-09T14:18:12.1951308Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:18:12.1951638Z ) 2025-09-09T14:18:12.1951744Z 2025-09-09T14:18:12.1951749Z 2025-09-09T14:18:12.1951752Z 2025-09-09T14:18:12.1951848Z def forward(self, x): 2025-09-09T14:18:12.1952553Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.010372933000326157, 0, -128, 127, torch.int8); x = None 2025-09-09T14:18:12.1954000Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.010372933000326157, 0, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:12.1955245Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:18:12.1956231Z quantize_per_tensor_default_1 = 
torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.010934998281300068, 1, -128, 127, torch.int8); conv = None 2025-09-09T14:18:12.1957698Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.010934998281300068, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:18:12.1958913Z hardtanh = self.hardtanh(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:18:12.1959978Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 0.010934998281300068, 1, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:18:12.1961490Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010934998281300068, 1, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:18:12.1962493Z return dequantize_per_tensor_default_2 2025-09-09T14:18:12.1962798Z 2025-09-09T14:18:12.1963156Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:12.1963571Z diff: tensor([[[0., 0., 0.], 2025-09-09T14:18:12.1963821Z [0., 0., 0.], 2025-09-09T14:18:12.1964053Z [0., 0., 0.]]]) 2025-09-09T14:18:12.1964503Z PASSED 2025-09-09T14:18:12.1965262Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_fold_bn_erases_bn_node PASSED 2025-09-09T14:18:12.1966467Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_bias_derived_qspec PASSED 2025-09-09T14:18:12.1967616Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_fusion model pt2e: GraphModule( 2025-09-09T14:18:12.1968309Z (conv): Module() 2025-09-09T14:18:12.1968521Z (bn): Module() 2025-09-09T14:18:12.1968849Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:12.1969922Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:23.4134264Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:18:23.4134984Z ) 2025-09-09T14:18:23.4135286Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:23.4136902Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:18:23.4138943Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1822, -0.1883, -0.1585]), max_val=tensor([0.1856, 0.1719, 0.1858])) 2025-09-09T14:18:23.4139682Z ) 2025-09-09T14:18:23.4139999Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:23.4141071Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0139]), zero_point=tensor([-11], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:23.4142338Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.628747820854187, max_val=1.9105255603790283) 2025-09-09T14:18:23.4143062Z ) 2025-09-09T14:18:23.4143301Z ) 2025-09-09T14:18:23.4143407Z 
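In the prepared QAT forward listed just below (and in the 1d variants earlier), BatchNorm is not yet erased: the graph scales the conv weight by gamma / sqrt(running_var + eps), runs the conv with a zero bias, undoes the scaling on the output, re-adds the bias, and only then applies batch_norm, so the weight fake-quantizer observes the folded weight while the BN statistics keep updating during training. A minimal numeric check of that scaling trick (standalone sketch, not the test's code; eval-mode BN is used only to make the comparison deterministic):

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    conv = torch.nn.Conv2d(3, 3, 3)
    bn = torch.nn.BatchNorm2d(3).eval()
    with torch.no_grad():
        bn.weight.uniform_(0.5, 1.5)        # make gamma non-trivial
        bn.running_var.uniform_(0.5, 1.5)   # make running_var non-trivial
    x = torch.randn(1, 3, 8, 8)

    factor = bn.weight / torch.sqrt(bn.running_var + bn.eps)    # gamma / sqrt(var + eps)
    y = F.conv2d(x, conv.weight * factor.reshape(-1, 1, 1, 1),  # fold factor into the weight
                 torch.zeros_like(conv.bias))
    y = y / factor.reshape(1, -1, 1, 1) + conv.bias.reshape(1, -1, 1, 1)
    y = bn(y)

    print(torch.allclose(y, bn(conv(x)), atol=1e-5))            # True: same result as conv -> bn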
2025-09-09T14:18:23.4143430Z 2025-09-09T14:18:23.4143436Z 2025-09-09T14:18:23.4143567Z def forward(self, x): 2025-09-09T14:18:23.4143970Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:23.4144349Z conv_weight = self.conv.weight 2025-09-09T14:18:23.4144654Z conv_bias = self.conv.bias 2025-09-09T14:18:23.4144924Z bn_weight = self.bn.weight 2025-09-09T14:18:23.4145200Z bn_bias = self.bn.bias 2025-09-09T14:18:23.4145472Z bn_running_mean = self.bn.running_mean 2025-09-09T14:18:23.4145878Z bn_running_var = self.bn.running_var 2025-09-09T14:18:23.4146339Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:18:23.4146944Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:23.4147598Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:18:23.4148191Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:18:23.4148625Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:18:23.4149080Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:18:23.4149573Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:18:23.4150173Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:18:23.4150796Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:18:23.4151781Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:18:23.4152889Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:18:23.4153913Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:18:23.4154530Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:18:23.4155390Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 2025-09-09T14:18:23.4156025Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:18:23.4157007Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:18:23.4158079Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:18:23.4158760Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:18:23.4159186Z 2025-09-09T14:18:23.4159493Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:23.4159895Z model fx: GraphModule( 2025-09-09T14:18:23.4160247Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:23.4161316Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:23.4162596Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:18:23.4163179Z ) 2025-09-09T14:18:23.4163369Z (conv): ConvBn2d( 2025-09-09T14:18:23.4163625Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:18:23.4164074Z (bn): BatchNorm2d(3, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:18:23.4164603Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:23.4165693Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:18:23.4167195Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1822, -0.1883, -0.1585]), max_val=tensor([0.1856, 0.1719, 0.1858])) 2025-09-09T14:18:23.4167943Z ) 2025-09-09T14:18:23.4168122Z ) 2025-09-09T14:18:23.4168430Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:23.4169501Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0139]), zero_point=tensor([-11], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:23.4170780Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.628747820854187, max_val=1.9105255603790283) 2025-09-09T14:18:23.4171364Z ) 2025-09-09T14:18:23.4171538Z ) 2025-09-09T14:18:23.4171651Z 2025-09-09T14:18:23.4171655Z 2025-09-09T14:18:23.4171659Z 2025-09-09T14:18:23.4171750Z def forward(self, x): 2025-09-09T14:18:23.4172129Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:23.4172736Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:18:23.4173363Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:18:23.4173837Z return activation_post_process_1 2025-09-09T14:18:23.4174128Z 2025-09-09T14:18:23.4174423Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:23.4174926Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:18:23.4175181Z [0., 0., 0.], 2025-09-09T14:18:23.4175416Z [0., 0., 0.]], 2025-09-09T14:18:23.4175568Z 2025-09-09T14:18:23.4175648Z [[0., 0., 0.], 2025-09-09T14:18:23.4175877Z [0., 0., 0.], 2025-09-09T14:18:23.4176104Z [0., 0., 0.]], 2025-09-09T14:18:23.4176252Z 2025-09-09T14:18:23.4176332Z [[0., 0., 0.], 2025-09-09T14:18:23.4176558Z [0., 0., 0.], 2025-09-09T14:18:23.4176805Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:18:23.4177275Z converted model pt2e: GraphModule( 2025-09-09T14:18:23.4177554Z (conv): Module() 2025-09-09T14:18:23.4177778Z (bn): Module() 2025-09-09T14:18:23.4177978Z ) 2025-09-09T14:18:23.4178094Z 2025-09-09T14:18:23.4178098Z 2025-09-09T14:18:23.4178102Z 2025-09-09T14:18:23.4178192Z def forward(self, x): 2025-09-09T14:18:23.4178508Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:23.4178871Z conv_bias = self.conv.bias 2025-09-09T14:18:23.4179610Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:18:23.4181047Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:23.4182031Z _scale_0 = self._scale_0 2025-09-09T14:18:23.4182319Z _zero_point_0 = self._zero_point_0 2025-09-09T14:18:23.4182647Z quantize_per_channel = self._frozen_param0 2025-09-09T14:18:23.4183663Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, 
_scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:18:23.4185236Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:18:23.4186618Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.0138795031234622, -11, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:18:23.4188107Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0138795031234622, -11, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:18:23.4189263Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:18:23.4189731Z 2025-09-09T14:18:23.4190027Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:23.4190451Z onverted model fx: GraphModule( 2025-09-09T14:18:23.4190884Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:18:23.4191440Z ) 2025-09-09T14:18:23.4191546Z 2025-09-09T14:18:23.4191550Z 2025-09-09T14:18:23.4191573Z 2025-09-09T14:18:23.4191666Z def forward(self, x): 2025-09-09T14:18:23.4192360Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:18:23.4193851Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:23.4195140Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:18:23.4196107Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0138795031234622, -11, -128, 127, torch.int8); conv = None 2025-09-09T14:18:23.4198389Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0138795031234622, -11, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:18:23.4199433Z return dequantize_per_tensor_default_1 2025-09-09T14:18:23.4199727Z 2025-09-09T14:18:23.4200038Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:23.4200438Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:18:23.4200705Z [0., 0., 0.], 2025-09-09T14:18:23.4200926Z [0., 0., 0.]], 2025-09-09T14:18:23.4201089Z 2025-09-09T14:18:23.4201170Z [[0., 0., 0.], 2025-09-09T14:18:23.4201385Z [0., 0., 0.], 2025-09-09T14:18:23.4201693Z [0., 0., 0.]], 2025-09-09T14:18:23.4201841Z 2025-09-09T14:18:23.4201934Z [[0., 0., 0.], 2025-09-09T14:18:23.4202148Z [0., 0., 0.], 2025-09-09T14:18:23.4202378Z [0., 0., 0.]]]]) 2025-09-09T14:18:23.4202623Z model pt2e: GraphModule( 2025-09-09T14:18:32.8990720Z (conv): Module() 2025-09-09T14:18:32.8991095Z (bn): Module() 2025-09-09T14:18:32.8991569Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:32.8993074Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:32.8994894Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, 
max_val=2.143237352371216) 2025-09-09T14:18:32.8995693Z ) 2025-09-09T14:18:32.8996108Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:32.8997610Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:18:32.8999399Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.1882954239845276, max_val=0.18581794202327728) 2025-09-09T14:18:32.9000213Z ) 2025-09-09T14:18:32.9000612Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:32.9002118Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0139]), zero_point=tensor([-11], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:32.9003871Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.6255242824554443, max_val=1.9112863540649414) 2025-09-09T14:18:32.9004672Z ) 2025-09-09T14:18:32.9004931Z ) 2025-09-09T14:18:32.9005077Z 2025-09-09T14:18:32.9005082Z 2025-09-09T14:18:32.9005087Z 2025-09-09T14:18:32.9005207Z def forward(self, x): 2025-09-09T14:18:32.9005625Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:32.9006134Z conv_weight = self.conv.weight 2025-09-09T14:18:32.9006528Z conv_bias = self.conv.bias 2025-09-09T14:18:32.9006909Z bn_weight = self.bn.weight 2025-09-09T14:18:32.9007269Z bn_bias = self.bn.bias 2025-09-09T14:18:32.9007662Z bn_running_mean = self.bn.running_mean 2025-09-09T14:18:32.9008098Z bn_running_var = self.bn.running_var 2025-09-09T14:18:32.9008601Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:18:32.9009261Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:32.9010336Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:18:32.9011152Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:18:32.9011729Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:18:32.9012346Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:18:32.9012995Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:18:32.9013767Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:18:32.9014966Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:18:32.9016052Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:18:32.9017650Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:18:32.9019108Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:18:32.9019960Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:18:32.9020993Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 2025-09-09T14:18:32.9021878Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:18:32.9023267Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = 
bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:18:32.9024780Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:18:32.9025738Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:18:32.9026351Z 2025-09-09T14:18:32.9026797Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:32.9027228Z model fx: GraphModule( 2025-09-09T14:18:32.9027610Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:32.9028767Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:32.9030106Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:18:32.9030728Z ) 2025-09-09T14:18:32.9030934Z (conv): ConvBn2d( 2025-09-09T14:18:32.9031203Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:18:32.9031687Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:18:32.9032249Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:32.9033366Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:18:32.9034814Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.1882954239845276, max_val=0.18581794202327728) 2025-09-09T14:18:32.9035450Z ) 2025-09-09T14:18:32.9035640Z ) 2025-09-09T14:18:32.9035966Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:32.9037110Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0139]), zero_point=tensor([-11], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:32.9038452Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.6255242824554443, max_val=1.9112863540649414) 2025-09-09T14:18:32.9039080Z ) 2025-09-09T14:18:32.9039265Z ) 2025-09-09T14:18:32.9039392Z 2025-09-09T14:18:32.9039397Z 2025-09-09T14:18:32.9039401Z 2025-09-09T14:18:32.9039495Z def forward(self, x): 2025-09-09T14:18:32.9039907Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:32.9040533Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:18:32.9041191Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:18:32.9041691Z return activation_post_process_1 2025-09-09T14:18:32.9041999Z 2025-09-09T14:18:32.9042310Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:32.9042854Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:18:32.9043129Z [0., 0., 0.], 2025-09-09T14:18:32.9043380Z [0., 0., 0.]], 2025-09-09T14:18:32.9043539Z 2025-09-09T14:18:32.9043638Z [[0., 0., 0.], 2025-09-09T14:18:32.9043872Z [0., 0., 0.], 2025-09-09T14:18:32.9044120Z [0., 0., 0.]], 2025-09-09T14:18:32.9044276Z 2025-09-09T14:18:32.9044360Z [[0., 0., 0.], 2025-09-09T14:18:32.9044602Z [0., 0., 0.], 2025-09-09T14:18:32.9044871Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:18:32.9045435Z converted model pt2e: GraphModule( 2025-09-09T14:18:32.9045723Z (conv): Module() 
2025-09-09T14:18:32.9045955Z (bn): Module() 2025-09-09T14:18:32.9046165Z ) 2025-09-09T14:18:32.9046286Z 2025-09-09T14:18:32.9046291Z 2025-09-09T14:18:32.9046295Z 2025-09-09T14:18:32.9046389Z def forward(self, x): 2025-09-09T14:18:32.9046715Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:32.9047091Z conv_bias = self.conv.bias 2025-09-09T14:18:32.9047860Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:18:32.9049332Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:32.9050389Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:18:32.9051340Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0014826410915702581, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:18:32.9052836Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:18:32.9054279Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.013869845308363438, -11, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:18:32.9055842Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.013869845308363438, -11, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:18:32.9057198Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:18:32.9057668Z 2025-09-09T14:18:32.9057964Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:32.9058392Z onverted model fx: GraphModule( 2025-09-09T14:18:32.9058828Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:18:32.9059248Z ) 2025-09-09T14:18:32.9059351Z 2025-09-09T14:18:32.9059355Z 2025-09-09T14:18:32.9059359Z 2025-09-09T14:18:32.9059462Z def forward(self, x): 2025-09-09T14:18:32.9060153Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:18:32.9061592Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:32.9062757Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:18:32.9063726Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.013869845308363438, -11, -128, 127, torch.int8); conv = None 2025-09-09T14:18:32.9065215Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.013869845308363438, -11, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:18:32.9066240Z return dequantize_per_tensor_default_1 2025-09-09T14:18:32.9066535Z 2025-09-09T14:18:32.9066917Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:32.9067322Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:18:32.9067584Z [0., 0., 0.], 
2025-09-09T14:18:32.9067808Z [0., 0., 0.]], 2025-09-09T14:18:32.9067971Z 2025-09-09T14:18:32.9068052Z [[0., 0., 0.], 2025-09-09T14:18:32.9068268Z [0., 0., 0.], 2025-09-09T14:18:32.9068495Z [0., 0., 0.]], 2025-09-09T14:18:32.9068642Z 2025-09-09T14:18:45.9398638Z [[0., 0., 0.], 2025-09-09T14:18:45.9399423Z [0., 0., 0.], 2025-09-09T14:18:45.9399737Z [0., 0., 0.]]]]) 2025-09-09T14:18:45.9400283Z PASSED 2025-09-09T14:18:45.9401309Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_fusion_cuda SKIPPED 2025-09-09T14:18:45.9402820Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_fusion_literal_args model pt2e: GraphModule( 2025-09-09T14:18:45.9403835Z (conv): Module() 2025-09-09T14:18:45.9404127Z (bn): Module() 2025-09-09T14:18:45.9404538Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:45.9415584Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0147]), zero_point=tensor([-28], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:45.9417312Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.4721859693527222, max_val=2.2869999408721924) 2025-09-09T14:18:45.9418134Z ) 2025-09-09T14:18:45.9418533Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:45.9420033Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0014, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:18:45.9422049Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1897, -0.1787, -0.1913]), max_val=tensor([0.1870, 0.1478, 0.1740])) 2025-09-09T14:18:45.9423030Z ) 2025-09-09T14:18:45.9423422Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:45.9424855Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0313]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:45.9426536Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-4.046965599060059, max_val=3.922553539276123) 2025-09-09T14:18:45.9427313Z ) 2025-09-09T14:18:45.9427547Z ) 2025-09-09T14:18:45.9427699Z 2025-09-09T14:18:45.9427704Z 2025-09-09T14:18:45.9427709Z 2025-09-09T14:18:45.9427829Z def forward(self, x): 2025-09-09T14:18:45.9428233Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:45.9428754Z conv_weight = self.conv.weight 2025-09-09T14:18:45.9429148Z conv_bias = self.conv.bias 2025-09-09T14:18:45.9429517Z bn_weight = self.bn.weight 2025-09-09T14:18:45.9429868Z bn_bias = self.bn.bias 2025-09-09T14:18:45.9430238Z bn_running_mean = self.bn.running_mean 2025-09-09T14:18:45.9430662Z bn_running_var = self.bn.running_var 2025-09-09T14:18:45.9431148Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:18:45.9431788Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:45.9432677Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:18:45.9433447Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:18:45.9434017Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:18:45.9434690Z 
div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:18:45.9435295Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:18:45.9436121Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:18:45.9436751Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:18:45.9437452Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:18:45.9438596Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like, [2, 2], [4, 4]); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:18:45.9439725Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:18:45.9440340Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:18:45.9440989Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 2025-09-09T14:18:45.9441630Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:18:45.9442632Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:18:45.9443689Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:18:45.9444376Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:18:45.9444808Z 2025-09-09T14:18:45.9445124Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:45.9445521Z model fx: GraphModule( 2025-09-09T14:18:45.9445877Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:45.9446966Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0147]), zero_point=tensor([-28], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:45.9448241Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.4721859693527222, max_val=2.2869999408721924) 2025-09-09T14:18:45.9448830Z ) 2025-09-09T14:18:45.9449017Z (conv): ConvBn2d( 2025-09-09T14:18:45.9449306Z 3, 3, kernel_size=(3, 3), stride=(2, 2), padding=(4, 4) 2025-09-09T14:18:45.9449811Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:18:45.9450322Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:45.9451433Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0014, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:18:45.9452919Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1897, -0.1787, -0.1913]), max_val=tensor([0.1870, 0.1478, 0.1740])) 2025-09-09T14:18:45.9453666Z ) 2025-09-09T14:18:45.9453860Z ) 2025-09-09T14:18:45.9454155Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:45.9455241Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0313]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 
2025-09-09T14:18:45.9456493Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-4.046965599060059, max_val=3.922553539276123) 2025-09-09T14:18:45.9457081Z ) 2025-09-09T14:18:45.9457258Z ) 2025-09-09T14:18:45.9457374Z 2025-09-09T14:18:45.9457379Z 2025-09-09T14:18:45.9457383Z 2025-09-09T14:18:45.9457472Z def forward(self, x): 2025-09-09T14:18:45.9457865Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:45.9458454Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:18:45.9459144Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:18:45.9459621Z return activation_post_process_1 2025-09-09T14:18:45.9459912Z 2025-09-09T14:18:45.9460207Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:45.9460629Z diff: tensor([[[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9460937Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9461202Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9461474Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9461800Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9462072Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:18:45.9462254Z 2025-09-09T14:18:45.9462339Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9462608Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9462866Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9463136Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9463408Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9463671Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:18:45.9463855Z 2025-09-09T14:18:45.9463955Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9464213Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9464492Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9464752Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9465026Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:45.9465332Z [0., 0., 0., 0., 0., 0.]]]], grad_fn=) 2025-09-09T14:18:45.9465692Z converted model pt2e: GraphModule( 2025-09-09T14:18:45.9465993Z (conv): Module() 2025-09-09T14:18:45.9466207Z (bn): Module() 2025-09-09T14:18:45.9466425Z ) 2025-09-09T14:18:45.9466530Z 2025-09-09T14:18:45.9466534Z 2025-09-09T14:18:45.9466538Z 2025-09-09T14:18:45.9466630Z def forward(self, x): 2025-09-09T14:18:45.9466945Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:45.9467308Z conv_bias = self.conv.bias 2025-09-09T14:18:45.9468054Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01474190503358841, -28, -128, 127, torch.int8); x = None 2025-09-09T14:18:45.9469493Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01474190503358841, -28, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:45.9470469Z _scale_0 = self._scale_0 2025-09-09T14:18:45.9470755Z _zero_point_0 = self._zero_point_0 2025-09-09T14:18:45.9471084Z quantize_per_channel = self._frozen_param0 2025-09-09T14:18:45.9472107Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:18:45.9473690Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias, [2, 2], [4, 4]); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:18:45.9475170Z quantize_per_tensor_default_1 = 
torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.031253017485141754, 1, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:18:45.9476686Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.031253017485141754, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:18:45.9477852Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:18:45.9478315Z 2025-09-09T14:18:45.9478627Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:45.9479038Z onverted model fx: GraphModule( 2025-09-09T14:18:45.9479521Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(2, 2), padding=(4, 4)) 2025-09-09T14:18:45.9479990Z ) 2025-09-09T14:18:45.9480107Z 2025-09-09T14:18:45.9480112Z 2025-09-09T14:18:45.9480190Z 2025-09-09T14:18:45.9480282Z def forward(self, x): 2025-09-09T14:18:57.2269865Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01474190503358841, -28, -128, 127, torch.int8); x = None 2025-09-09T14:18:57.2271883Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01474190503358841, -28, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:57.2273440Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:18:57.2275160Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.031253017485141754, 1, -128, 127, torch.int8); conv = None 2025-09-09T14:18:57.2277141Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.031253017485141754, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:18:57.2278491Z return dequantize_per_tensor_default_1 2025-09-09T14:18:57.2278894Z 2025-09-09T14:18:57.2279284Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:57.2279834Z diff: tensor([[[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2280217Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2280576Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2280934Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2281275Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2281639Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:18:57.2281885Z 2025-09-09T14:18:57.2281996Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2282353Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2282698Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2283051Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2283390Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2283751Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:18:57.2283993Z 2025-09-09T14:18:57.2284122Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2284461Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2284814Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2285151Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2285503Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2285848Z [0., 0., 0., 0., 0., 0.]]]]) 2025-09-09T14:18:57.2286238Z model pt2e: GraphModule( 2025-09-09T14:18:57.2286565Z (conv): Module() 2025-09-09T14:18:57.2286857Z (bn): Module() 2025-09-09T14:18:57.2287280Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:57.2288710Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0147]), 
zero_point=tensor([-28], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:57.2290453Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.4721859693527222, max_val=2.2869999408721924) 2025-09-09T14:18:57.2291230Z ) 2025-09-09T14:18:57.2291626Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:57.2293084Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:18:57.2294791Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19127479195594788, max_val=0.1870359182357788) 2025-09-09T14:18:57.2295594Z ) 2025-09-09T14:18:57.2295977Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:57.2297409Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0312]), zero_point=tensor([2], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:57.2299268Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-4.046965599060059, max_val=3.9041571617126465) 2025-09-09T14:18:57.2300040Z ) 2025-09-09T14:18:57.2300291Z ) 2025-09-09T14:18:57.2300424Z 2025-09-09T14:18:57.2300429Z 2025-09-09T14:18:57.2300434Z 2025-09-09T14:18:57.2300554Z def forward(self, x): 2025-09-09T14:18:57.2300960Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:57.2301443Z conv_weight = self.conv.weight 2025-09-09T14:18:57.2301835Z conv_bias = self.conv.bias 2025-09-09T14:18:57.2302269Z bn_weight = self.bn.weight 2025-09-09T14:18:57.2302618Z bn_bias = self.bn.bias 2025-09-09T14:18:57.2302984Z bn_running_mean = self.bn.running_mean 2025-09-09T14:18:57.2303404Z bn_running_var = self.bn.running_var 2025-09-09T14:18:57.2303878Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:18:57.2304512Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:57.2305387Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:18:57.2306156Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:18:57.2306701Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:18:57.2307158Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:18:57.2307644Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:18:57.2308213Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:18:57.2308842Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:18:57.2309541Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:18:57.2310864Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like, [2, 2], [4, 4]); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:18:57.2311897Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:18:57.2312514Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:18:57.2313163Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 
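# editor's note (annotation, not part of the captured log): the traced statements above and
# below implement the standard QAT batch-norm folding pattern:
#   scale_factor   = bn_weight / sqrt(bn_running_var + eps)            (the `div` node)
#   folded_weight  = conv_weight * scale_factor.reshape(-1, 1, 1, 1)    -> weight fake-quant
#   y   = conv2d(x, folded_weight, zeros_like(conv_bias), stride=[2, 2], padding=[4, 4])
#   y   = y / scale_factor.reshape(1, -1, 1, 1) + conv_bias.reshape(1, -1, 1, 1)
#   out = batch_norm(y, ..., training=True)   # running stats keep updating during QAT
# so the weight observer sees the folded weight while batch norm continues to train normally.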
2025-09-09T14:18:57.2313801Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:18:57.2314860Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:18:57.2315922Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:18:57.2316605Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:18:57.2317033Z 2025-09-09T14:18:57.2317350Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:57.2317750Z model fx: GraphModule( 2025-09-09T14:18:57.2318103Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:57.2319189Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0147]), zero_point=tensor([-28], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:57.2320464Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.4721859693527222, max_val=2.2869999408721924) 2025-09-09T14:18:57.2321062Z ) 2025-09-09T14:18:57.2321251Z (conv): ConvBn2d( 2025-09-09T14:18:57.2321542Z 3, 3, kernel_size=(3, 3), stride=(2, 2), padding=(4, 4) 2025-09-09T14:18:57.2322049Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:18:57.2322584Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:57.2323776Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:18:57.2325079Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19127479195594788, max_val=0.1870359182357788) 2025-09-09T14:18:57.2325680Z ) 2025-09-09T14:18:57.2325863Z ) 2025-09-09T14:18:57.2326169Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:18:57.2327336Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0312]), zero_point=tensor([2], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:18:57.2328606Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-4.046965599060059, max_val=3.9041571617126465) 2025-09-09T14:18:57.2329178Z ) 2025-09-09T14:18:57.2329376Z ) 2025-09-09T14:18:57.2329482Z 2025-09-09T14:18:57.2329486Z 2025-09-09T14:18:57.2329490Z 2025-09-09T14:18:57.2329594Z def forward(self, x): 2025-09-09T14:18:57.2329977Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:18:57.2330587Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:18:57.2331197Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:18:57.2331686Z return activation_post_process_1 2025-09-09T14:18:57.2331970Z 2025-09-09T14:18:57.2332278Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:18:57.2332699Z diff: tensor([[[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2332992Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2333267Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2333525Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2333799Z [0., 0., 0., 0., 0., 0.], 
2025-09-09T14:18:57.2334061Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:18:57.2334256Z 2025-09-09T14:18:57.2334342Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2334598Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2334866Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2335135Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2335389Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2335656Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:18:57.2335837Z 2025-09-09T14:18:57.2335922Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2336195Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2336451Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2336722Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2336978Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:18:57.2337292Z [0., 0., 0., 0., 0., 0.]]]], grad_fn=) 2025-09-09T14:18:57.2337659Z converted model pt2e: GraphModule( 2025-09-09T14:18:57.2337941Z (conv): Module() 2025-09-09T14:18:57.2338173Z (bn): Module() 2025-09-09T14:18:57.2338373Z ) 2025-09-09T14:18:57.2338475Z 2025-09-09T14:18:57.2338479Z 2025-09-09T14:18:57.2338495Z 2025-09-09T14:18:57.2338585Z def forward(self, x): 2025-09-09T14:18:57.2338894Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:18:57.2339276Z conv_bias = self.conv.bias 2025-09-09T14:18:57.2340012Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01474190503358841, -28, -128, 127, torch.int8); x = None 2025-09-09T14:18:57.2341458Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01474190503358841, -28, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:18:57.2342515Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:19:03.3301753Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0015061007579788566, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:19:03.3303308Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias, [2, 2], [4, 4]); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:19:03.3304757Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.03118087351322174, 2, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:19:03.3306255Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.03118087351322174, 2, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:19:03.3307534Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:19:03.3308009Z 2025-09-09T14:19:03.3308306Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:03.3308734Z onverted model fx: GraphModule( 2025-09-09T14:19:03.3309215Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(2, 2), padding=(4, 4)) 2025-09-09T14:19:03.3309696Z ) 2025-09-09T14:19:03.3309799Z 2025-09-09T14:19:03.3309804Z 2025-09-09T14:19:03.3309820Z 2025-09-09T14:19:03.3310071Z def forward(self, x): 2025-09-09T14:19:03.3310768Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.01474190503358841, -28, -128, 127, torch.int8); x = None 2025-09-09T14:19:03.3312201Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.01474190503358841, 
-28, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:03.3313368Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:19:03.3314328Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.03118087351322174, 2, -128, 127, torch.int8); conv = None 2025-09-09T14:19:03.3315871Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.03118087351322174, 2, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:19:03.3316884Z return dequantize_per_tensor_default_1 2025-09-09T14:19:03.3317182Z 2025-09-09T14:19:03.3317493Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:03.3317901Z diff: tensor([[[[0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3318208Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3318469Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3318737Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3318995Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3319267Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:19:03.3319448Z 2025-09-09T14:19:03.3319544Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3319805Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3320074Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3320332Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3320600Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3320859Z [0., 0., 0., 0., 0., 0.]], 2025-09-09T14:19:03.3321055Z 2025-09-09T14:19:03.3321139Z [[0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3321398Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3321668Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3321937Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3322199Z [0., 0., 0., 0., 0., 0.], 2025-09-09T14:19:03.3322472Z [0., 0., 0., 0., 0., 0.]]]]) 2025-09-09T14:19:03.3322945Z PASSED 2025-09-09T14:19:03.3323658Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_fusion_no_conv_bias model pt2e: GraphModule( 2025-09-09T14:19:03.3324394Z (conv): Module() 2025-09-09T14:19:03.3324626Z (bn): Module() 2025-09-09T14:19:03.3325080Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:03.3326170Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:03.3327457Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:03.3328033Z ) 2025-09-09T14:19:03.3329109Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:03.3330254Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0014]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:19:03.3331739Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1720, -0.1912, -0.1684]), max_val=tensor([0.1914, 0.1792, 0.1824])) 2025-09-09T14:19:03.3332496Z ) 2025-09-09T14:19:03.3332790Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:03.3333868Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0200]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.int8, 
quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:03.3335152Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.5782737731933594, max_val=2.5220179557800293) 2025-09-09T14:19:03.3335738Z ) 2025-09-09T14:19:03.3335927Z ) 2025-09-09T14:19:03.3336028Z 2025-09-09T14:19:03.3336032Z 2025-09-09T14:19:03.3336036Z 2025-09-09T14:19:03.3336125Z def forward(self, x): 2025-09-09T14:19:03.3336440Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:03.3336820Z conv_weight = self.conv.weight 2025-09-09T14:19:03.3337110Z bn_weight = self.bn.weight 2025-09-09T14:19:03.3337396Z bn_bias = self.bn.bias 2025-09-09T14:19:03.3337666Z bn_running_mean = self.bn.running_mean 2025-09-09T14:19:03.3338002Z bn_running_var = self.bn.running_var 2025-09-09T14:19:03.3338354Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:19:03.3338849Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:03.3339508Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:19:03.3340117Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:19:03.3340552Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:19:03.3340998Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:19:03.3341499Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:19:03.3342054Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:19:03.3342694Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:19:03.3343644Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:19:03.3344590Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:19:03.3345202Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:19:03.3346211Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:19:03.3347285Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:19:03.3347964Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:19:03.3348392Z 2025-09-09T14:19:03.3348773Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:03.3349176Z model fx: GraphModule( 2025-09-09T14:19:03.3349538Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:03.3350619Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:03.3351899Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:03.3352546Z ) 2025-09-09T14:19:03.3352732Z (conv): ConvBn2d( 2025-09-09T14:19:03.3353008Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:19:03.3353482Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:19:03.3354002Z (weight_fake_quant): 
FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:03.3355196Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0014]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:19:03.3356699Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1720, -0.1912, -0.1684]), max_val=tensor([0.1914, 0.1792, 0.1824])) 2025-09-09T14:19:03.3357449Z ) 2025-09-09T14:19:03.3357629Z ) 2025-09-09T14:19:03.3357936Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:03.3358999Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0200]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:03.3360272Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.5782737731933594, max_val=2.5220179557800293) 2025-09-09T14:19:03.3360862Z ) 2025-09-09T14:19:03.3361036Z ) 2025-09-09T14:19:03.3361138Z 2025-09-09T14:19:03.3361153Z 2025-09-09T14:19:03.3361157Z 2025-09-09T14:19:03.3361245Z def forward(self, x): 2025-09-09T14:19:03.3361622Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:03.3362220Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:19:03.3362837Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:19:03.3363311Z return activation_post_process_1 2025-09-09T14:19:03.3363596Z 2025-09-09T14:19:03.3363888Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:03.3364303Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:19:03.3364552Z [0., 0., 0.], 2025-09-09T14:19:03.3364786Z [0., 0., 0.]], 2025-09-09T14:19:03.3364933Z 2025-09-09T14:19:03.3365013Z [[0., 0., 0.], 2025-09-09T14:19:03.3365243Z [0., 0., 0.], 2025-09-09T14:19:03.3365473Z [0., 0., 0.]], 2025-09-09T14:19:03.3365620Z 2025-09-09T14:19:03.3365701Z [[0., 0., 0.], 2025-09-09T14:19:03.3365930Z [0., 0., 0.], 2025-09-09T14:19:03.3366151Z [0., 0., 0.]]], 2025-09-09T14:19:03.3366302Z 2025-09-09T14:19:03.3366319Z 2025-09-09T14:19:03.3366398Z [[[0., 0., 0.], 2025-09-09T14:19:03.3366616Z [0., 0., 0.], 2025-09-09T14:19:03.3366844Z [0., 0., 0.]], 2025-09-09T14:19:03.3366993Z 2025-09-09T14:19:03.3367080Z [[0., 0., 0.], 2025-09-09T14:19:03.3367306Z [0., 0., 0.], 2025-09-09T14:19:14.5575900Z [0., 0., 0.]], 2025-09-09T14:19:14.5576201Z 2025-09-09T14:19:14.5576332Z [[0., 0., 0.], 2025-09-09T14:19:14.5576694Z [0., 0., 0.], 2025-09-09T14:19:14.5576985Z [0., 0., 0.]]], 2025-09-09T14:19:14.5577200Z 2025-09-09T14:19:14.5577205Z 2025-09-09T14:19:14.5577310Z [[[0., 0., 0.], 2025-09-09T14:19:14.5577597Z [0., 0., 0.], 2025-09-09T14:19:14.5578225Z [0., 0., 0.]], 2025-09-09T14:19:14.5578424Z 2025-09-09T14:19:14.5578545Z [[0., 0., 0.], 2025-09-09T14:19:14.5578834Z [0., 0., 0.], 2025-09-09T14:19:14.5579141Z [0., 0., 0.]], 2025-09-09T14:19:14.5579338Z 2025-09-09T14:19:14.5579442Z [[0., 0., 0.], 2025-09-09T14:19:14.5579743Z [0., 0., 0.], 2025-09-09T14:19:14.5580068Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:19:14.5580526Z converted model pt2e: GraphModule( 2025-09-09T14:19:14.5581007Z (conv): Module() 2025-09-09T14:19:14.5581295Z (bn): Module() 2025-09-09T14:19:14.5581574Z ) 2025-09-09T14:19:14.5581705Z 2025-09-09T14:19:14.5581710Z 2025-09-09T14:19:14.5581715Z 2025-09-09T14:19:14.5581850Z 
def forward(self, x): 2025-09-09T14:19:14.5582240Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:14.5583361Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:14.5584961Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:14.5585944Z _scale_0 = self._scale_0 2025-09-09T14:19:14.5586242Z _zero_point_0 = self._zero_point_0 2025-09-09T14:19:14.5586571Z quantize_per_channel = self._frozen_param0 2025-09-09T14:19:14.5587595Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:19:14.5588624Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:19:14.5589595Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_weight_bias = None 2025-09-09T14:19:14.5591056Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.020001143217086792, 1, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:19:14.5592570Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.020001143217086792, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:19:14.5593730Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:19:14.5594207Z 2025-09-09T14:19:14.5594506Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:14.5595006Z onverted model fx: GraphModule( 2025-09-09T14:19:14.5595426Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:19:14.5595863Z ) 2025-09-09T14:19:14.5595968Z 2025-09-09T14:19:14.5595972Z 2025-09-09T14:19:14.5595976Z 2025-09-09T14:19:14.5596085Z def forward(self, x): 2025-09-09T14:19:14.5596780Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:14.5598226Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:14.5599390Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:19:14.5600362Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.020001143217086792, 1, -128, 127, torch.int8); conv = None 2025-09-09T14:19:14.5601830Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.020001143217086792, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:19:14.5602917Z return dequantize_per_tensor_default_1 2025-09-09T14:19:14.5603229Z 2025-09-09T14:19:14.5603536Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:14.5603938Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:19:14.5604199Z [0., 0., 0.], 2025-09-09T14:19:14.5604421Z 
[0., 0., 0.]], 2025-09-09T14:19:14.5604571Z 2025-09-09T14:19:14.5604662Z [[0., 0., 0.], 2025-09-09T14:19:14.5604883Z [0., 0., 0.], 2025-09-09T14:19:14.5605111Z [0., 0., 0.]], 2025-09-09T14:19:14.5605331Z 2025-09-09T14:19:14.5605414Z [[0., 0., 0.], 2025-09-09T14:19:14.5605643Z [0., 0., 0.], 2025-09-09T14:19:14.5605875Z [0., 0., 0.]]], 2025-09-09T14:19:14.5606027Z 2025-09-09T14:19:14.5606031Z 2025-09-09T14:19:14.5606111Z [[[0., 0., 0.], 2025-09-09T14:19:14.5606342Z [0., 0., 0.], 2025-09-09T14:19:14.5606557Z [0., 0., 0.]], 2025-09-09T14:19:14.5606720Z 2025-09-09T14:19:14.5606802Z [[0., 0., 0.], 2025-09-09T14:19:14.5607017Z [0., 0., 0.], 2025-09-09T14:19:14.5607250Z [0., 0., 0.]], 2025-09-09T14:19:14.5607395Z 2025-09-09T14:19:14.5607474Z [[0., 0., 0.], 2025-09-09T14:19:14.5607700Z [0., 0., 0.], 2025-09-09T14:19:14.5607929Z [0., 0., 0.]]], 2025-09-09T14:19:14.5608080Z 2025-09-09T14:19:14.5608084Z 2025-09-09T14:19:14.5608163Z [[[0., 0., 0.], 2025-09-09T14:19:14.5608387Z [0., 0., 0.], 2025-09-09T14:19:14.5608602Z [0., 0., 0.]], 2025-09-09T14:19:14.5608765Z 2025-09-09T14:19:14.5608845Z [[0., 0., 0.], 2025-09-09T14:19:14.5609059Z [0., 0., 0.], 2025-09-09T14:19:14.5609286Z [0., 0., 0.]], 2025-09-09T14:19:14.5609433Z 2025-09-09T14:19:14.5609514Z [[0., 0., 0.], 2025-09-09T14:19:14.5609740Z [0., 0., 0.], 2025-09-09T14:19:14.5610182Z [0., 0., 0.]]]]) 2025-09-09T14:19:14.5610448Z model pt2e: GraphModule( 2025-09-09T14:19:14.5610708Z (conv): Module() 2025-09-09T14:19:14.5610928Z (bn): Module() 2025-09-09T14:19:14.5611260Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:14.5612338Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:14.5613730Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:14.5614327Z ) 2025-09-09T14:19:14.5614623Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:14.5615716Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:19:14.5617008Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19124282896518707, max_val=0.19141820073127747) 2025-09-09T14:19:14.5617594Z ) 2025-09-09T14:19:14.5617897Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:14.5618951Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0200]), zero_point=tensor([1], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:14.5620213Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.577202320098877, max_val=2.521923303604126) 2025-09-09T14:19:14.5620800Z ) 2025-09-09T14:19:14.5620975Z ) 2025-09-09T14:19:14.5621076Z 2025-09-09T14:19:14.5621080Z 2025-09-09T14:19:14.5621084Z 2025-09-09T14:19:14.5621184Z def forward(self, x): 2025-09-09T14:19:14.5621483Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:14.5621859Z conv_weight = self.conv.weight 2025-09-09T14:19:14.5622149Z bn_weight = self.bn.weight 2025-09-09T14:19:14.5622638Z bn_bias = self.bn.bias 2025-09-09T14:19:14.5622910Z bn_running_mean = 
self.bn.running_mean 2025-09-09T14:19:14.5623241Z bn_running_var = self.bn.running_var 2025-09-09T14:19:14.5623609Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:19:14.5624091Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:14.5624756Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:19:14.5625427Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:19:14.5625858Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:19:14.5626301Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:19:14.5626797Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:19:14.5627369Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:19:14.5627998Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:19:14.5628966Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:19:14.5629909Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:19:14.5630527Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:19:14.5631549Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:19:14.5632615Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:19:14.5633298Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:19:14.5633725Z 2025-09-09T14:19:14.5634036Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:14.5634444Z model fx: GraphModule( 2025-09-09T14:19:14.5634864Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:14.5635951Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:14.5637213Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:14.5637808Z ) 2025-09-09T14:19:14.5638000Z (conv): ConvBn2d( 2025-09-09T14:19:14.5638280Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:19:14.5638769Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:19:14.5639282Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:14.5640352Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:19:14.5641640Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19124282896518707, max_val=0.19141820073127747) 2025-09-09T14:19:14.5642243Z ) 2025-09-09T14:19:14.5642435Z ) 2025-09-09T14:19:14.5642730Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:14.5643815Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0200]), zero_point=tensor([1], 
dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:24.3865567Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.577202320098877, max_val=2.521923303604126) 2025-09-09T14:19:24.3866394Z ) 2025-09-09T14:19:24.3866941Z ) 2025-09-09T14:19:24.3867077Z 2025-09-09T14:19:24.3867083Z 2025-09-09T14:19:24.3867087Z 2025-09-09T14:19:24.3867223Z def forward(self, x): 2025-09-09T14:19:24.3867729Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:24.3868532Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:19:24.3869340Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:19:24.3869857Z return activation_post_process_1 2025-09-09T14:19:24.3870267Z 2025-09-09T14:19:24.3870565Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:24.3870982Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:19:24.3871234Z [0., 0., 0.], 2025-09-09T14:19:24.3871478Z [0., 0., 0.]], 2025-09-09T14:19:24.3871626Z 2025-09-09T14:19:24.3871705Z [[0., 0., 0.], 2025-09-09T14:19:24.3871931Z [0., 0., 0.], 2025-09-09T14:19:24.3872152Z [0., 0., 0.]], 2025-09-09T14:19:24.3872314Z 2025-09-09T14:19:24.3872393Z [[0., 0., 0.], 2025-09-09T14:19:24.3872607Z [0., 0., 0.], 2025-09-09T14:19:24.3872835Z [0., 0., 0.]]], 2025-09-09T14:19:24.3872986Z 2025-09-09T14:19:24.3872991Z 2025-09-09T14:19:24.3873084Z [[[0., 0., 0.], 2025-09-09T14:19:24.3873298Z [0., 0., 0.], 2025-09-09T14:19:24.3873528Z [0., 0., 0.]], 2025-09-09T14:19:24.3873677Z 2025-09-09T14:19:24.3873756Z [[0., 0., 0.], 2025-09-09T14:19:24.3873981Z [0., 0., 0.], 2025-09-09T14:19:24.3874203Z [0., 0., 0.]], 2025-09-09T14:19:24.3874361Z 2025-09-09T14:19:24.3874440Z [[0., 0., 0.], 2025-09-09T14:19:24.3874750Z [0., 0., 0.], 2025-09-09T14:19:24.3874971Z [0., 0., 0.]]], 2025-09-09T14:19:24.3875123Z 2025-09-09T14:19:24.3875127Z 2025-09-09T14:19:24.3875264Z [[[0., 0., 0.], 2025-09-09T14:19:24.3875495Z [0., 0., 0.], 2025-09-09T14:19:24.3875714Z [0., 0., 0.]], 2025-09-09T14:19:24.3875881Z 2025-09-09T14:19:24.3875962Z [[0., 0., 0.], 2025-09-09T14:19:24.3876176Z [0., 0., 0.], 2025-09-09T14:19:24.3876404Z [0., 0., 0.]], 2025-09-09T14:19:24.3876552Z 2025-09-09T14:19:24.3876642Z [[0., 0., 0.], 2025-09-09T14:19:24.3876856Z [0., 0., 0.], 2025-09-09T14:19:24.3877119Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:19:24.3877451Z converted model pt2e: GraphModule( 2025-09-09T14:19:24.3877744Z (conv): Module() 2025-09-09T14:19:24.3877961Z (bn): Module() 2025-09-09T14:19:24.3878175Z ) 2025-09-09T14:19:24.3878277Z 2025-09-09T14:19:24.3878281Z 2025-09-09T14:19:24.3878285Z 2025-09-09T14:19:24.3878374Z def forward(self, x): 2025-09-09T14:19:24.3878687Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:24.3879523Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:24.3880964Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:24.3881982Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:19:24.3882882Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.001507229870185256, 0, -127, 127, torch.int8); 
quantize_per_tensor = None 2025-09-09T14:19:24.3883807Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:19:24.3884764Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_weight_bias = None 2025-09-09T14:19:24.3886196Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.01999657228589058, 1, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:19:24.3887784Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.01999657228589058, 1, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:19:24.3888959Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:19:24.3889419Z 2025-09-09T14:19:24.3889734Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:24.3890144Z onverted model fx: GraphModule( 2025-09-09T14:19:24.3890645Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:19:24.3891062Z ) 2025-09-09T14:19:24.3891174Z 2025-09-09T14:19:24.3891178Z 2025-09-09T14:19:24.3891181Z 2025-09-09T14:19:24.3891271Z def forward(self, x): 2025-09-09T14:19:24.3891982Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:24.3893423Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:24.3894587Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:19:24.3895558Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.01999657228589058, 1, -128, 127, torch.int8); conv = None 2025-09-09T14:19:24.3897011Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.01999657228589058, 1, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:19:24.3898024Z return dequantize_per_tensor_default_1 2025-09-09T14:19:24.3898316Z 2025-09-09T14:19:24.3898626Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:24.3899029Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:19:24.3899291Z [0., 0., 0.], 2025-09-09T14:19:24.3899524Z [0., 0., 0.]], 2025-09-09T14:19:24.3899672Z 2025-09-09T14:19:24.3899751Z [[0., 0., 0.], 2025-09-09T14:19:24.3899982Z [0., 0., 0.], 2025-09-09T14:19:24.3900197Z [0., 0., 0.]], 2025-09-09T14:19:24.3900358Z 2025-09-09T14:19:24.3900437Z [[0., 0., 0.], 2025-09-09T14:19:24.3900653Z [0., 0., 0.], 2025-09-09T14:19:24.3900884Z [0., 0., 0.]]], 2025-09-09T14:19:24.3901039Z 2025-09-09T14:19:24.3901043Z 2025-09-09T14:19:24.3901136Z [[[0., 0., 0.], 2025-09-09T14:19:24.3901352Z [0., 0., 0.], 2025-09-09T14:19:24.3901585Z [0., 0., 0.]], 2025-09-09T14:19:24.3901733Z 2025-09-09T14:19:24.3901813Z [[0., 0., 0.], 2025-09-09T14:19:24.3902045Z [0., 0., 0.], 2025-09-09T14:19:24.3902264Z [0., 0., 0.]], 2025-09-09T14:19:24.3902446Z 2025-09-09T14:19:24.3902526Z [[0., 0., 0.], 2025-09-09T14:19:24.3902759Z [0., 0., 0.], 2025-09-09T14:19:24.3902978Z [0., 0., 0.]]], 2025-09-09T14:19:24.3903145Z 2025-09-09T14:19:24.3903149Z 2025-09-09T14:19:24.3903231Z [[[0., 
0., 0.], 2025-09-09T14:19:24.3903466Z [0., 0., 0.], 2025-09-09T14:19:24.3903687Z [0., 0., 0.]], 2025-09-09T14:19:24.3903835Z 2025-09-09T14:19:24.3903929Z [[0., 0., 0.], 2025-09-09T14:19:24.3904151Z [0., 0., 0.], 2025-09-09T14:19:24.3904385Z [0., 0., 0.]], 2025-09-09T14:19:24.3904535Z 2025-09-09T14:19:24.3904622Z [[0., 0., 0.], 2025-09-09T14:19:24.3904857Z [0., 0., 0.], 2025-09-09T14:19:24.3905080Z [0., 0., 0.]]]]) 2025-09-09T14:19:24.3905343Z model pt2e: GraphModule( 2025-09-09T14:19:24.3905588Z (conv1): Module() 2025-09-09T14:19:24.3905819Z (bn1): Module() 2025-09-09T14:19:24.3906045Z (conv2): Module() 2025-09-09T14:19:24.3906254Z (bn2): Module() 2025-09-09T14:19:24.3906584Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:24.3907734Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:24.3909041Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:24.3909616Z ) 2025-09-09T14:19:24.3910124Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:24.3911457Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0012, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:19:24.3912942Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1469, -0.1921, -0.1853]), max_val=tensor([0.1307, 0.1779, 0.1810])) 2025-09-09T14:19:24.3913689Z ) 2025-09-09T14:19:24.3913979Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:24.3915183Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0014, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:19:24.3916678Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1897, -0.1787, -0.1913]), max_val=tensor([0.1870, 0.1478, 0.1740])) 2025-09-09T14:19:24.3917412Z ) 2025-09-09T14:19:24.3917714Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:24.3918789Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0192]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:24.3920044Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.725703001022339, max_val=2.165140390396118) 2025-09-09T14:19:24.3920620Z ) 2025-09-09T14:19:24.3920913Z (activation_post_process_4): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:24.3921988Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0106]), zero_point=tensor([-2], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:24.3923267Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3412710428237915, max_val=1.3707175254821777) 2025-09-09T14:19:24.3923848Z ) 2025-09-09T14:19:24.3924037Z ) 2025-09-09T14:19:24.3924136Z 2025-09-09T14:19:24.3924141Z 2025-09-09T14:19:24.3924145Z 2025-09-09T14:19:24.3924236Z def forward(self, 
x): 2025-09-09T14:19:24.3924556Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:24.3924924Z conv1_weight = self.conv1.weight 2025-09-09T14:19:24.3925242Z bn1_weight = self.bn1.weight 2025-09-09T14:19:24.3925532Z bn1_bias = self.bn1.bias 2025-09-09T14:19:24.3925799Z conv2_weight = self.conv2.weight 2025-09-09T14:19:24.3926108Z conv2_bias = self.conv2.bias 2025-09-09T14:19:24.3926385Z bn2_weight = self.bn2.weight 2025-09-09T14:19:24.3926668Z bn2_bias = self.bn2.bias 2025-09-09T14:19:24.3926950Z bn1_running_mean = self.bn1.running_mean 2025-09-09T14:19:24.3927290Z bn1_running_var = self.bn1.running_var 2025-09-09T14:19:24.3927654Z bn1_num_batches_tracked = self.bn1.num_batches_tracked 2025-09-09T14:19:24.3928049Z bn2_running_mean = self.bn2.running_mean 2025-09-09T14:19:24.3928378Z bn2_running_var = self.bn2.running_var 2025-09-09T14:19:24.3928757Z bn2_num_batches_tracked = self.bn2.num_batches_tracked 2025-09-09T14:19:24.3929259Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:24.3930027Z add_ = torch.ops.aten.add_.Tensor(bn1_num_batches_tracked, 1); bn1_num_batches_tracked = add_ = None 2025-09-09T14:19:24.3930799Z add__1 = torch.ops.aten.add_.Tensor(bn2_num_batches_tracked, 1); bn2_num_batches_tracked = add__1 = None 2025-09-09T14:19:24.3931402Z add = torch.ops.aten.add.Tensor(bn2_running_var, 1e-05) 2025-09-09T14:19:24.3931845Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:19:24.3932313Z div = torch.ops.aten.div.Tensor(bn2_weight, sqrt); sqrt = None 2025-09-09T14:19:24.3932804Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:19:24.3933443Z mul = torch.ops.aten.mul.Tensor(conv2_weight, reshape); conv2_weight = reshape = None 2025-09-09T14:19:24.3934070Z activation_post_process_3 = self.activation_post_process_3(mul); mul = None 2025-09-09T14:19:33.8402238Z zeros_like = torch.ops.aten.zeros_like.default(conv2_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:19:33.8403200Z add_2 = torch.ops.aten.add.Tensor(bn1_running_var, 1e-05) 2025-09-09T14:19:33.8403692Z sqrt_1 = torch.ops.aten.sqrt.default(add_2); add_2 = None 2025-09-09T14:19:33.8404173Z div_2 = torch.ops.aten.div.Tensor(bn1_weight, sqrt_1); sqrt_1 = None 2025-09-09T14:19:33.8404697Z reshape_3 = torch.ops.aten.reshape.default(div_2, [-1, 1, 1, 1]) 2025-09-09T14:19:33.8405287Z mul_1 = torch.ops.aten.mul.Tensor(conv1_weight, reshape_3); conv1_weight = reshape_3 = None 2025-09-09T14:19:33.8405962Z activation_post_process_1 = self.activation_post_process_1(mul_1); mul_1 = None 2025-09-09T14:19:33.8406937Z conv2d_3 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:19:33.8407899Z reshape_4 = torch.ops.aten.reshape.default(div_2, [1, -1, 1, 1]); div_2 = None 2025-09-09T14:19:33.8408522Z div_3 = torch.ops.aten.div.Tensor(conv2d_3, reshape_4); conv2d_3 = reshape_4 = None 2025-09-09T14:19:33.8409572Z batch_norm_3 = torch.ops.aten.batch_norm.default(div_3, bn1_weight, bn1_bias, bn1_running_mean, bn1_running_var, True, 0.1, 1e-05, True); div_3 = bn1_weight = bn1_bias = bn1_running_mean = bn1_running_var = None 2025-09-09T14:19:33.8410838Z activation_post_process_2 = self.activation_post_process_2(batch_norm_3); batch_norm_3 = None 2025-09-09T14:19:33.8411948Z conv2d_2 = torch.ops.aten.conv2d.default(activation_post_process_2, activation_post_process_3, zeros_like); activation_post_process_2 = 
activation_post_process_3 = zeros_like = None 2025-09-09T14:19:33.8412957Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:19:33.8413611Z div_1 = torch.ops.aten.div.Tensor(conv2d_2, reshape_1); conv2d_2 = reshape_1 = None 2025-09-09T14:19:33.8414273Z reshape_2 = torch.ops.aten.reshape.default(conv2_bias, [1, -1, 1, 1]); conv2_bias = None 2025-09-09T14:19:33.8414904Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:19:33.8415932Z batch_norm_2 = torch.ops.aten.batch_norm.default(add_1, bn2_weight, bn2_bias, bn2_running_mean, bn2_running_var, True, 0.1, 1e-05, True); add_1 = bn2_weight = bn2_bias = bn2_running_mean = bn2_running_var = None 2025-09-09T14:19:33.8417032Z activation_post_process_4 = self.activation_post_process_4(batch_norm_2); batch_norm_2 = None 2025-09-09T14:19:33.8417698Z return pytree.tree_unflatten((activation_post_process_4,), self._out_spec) 2025-09-09T14:19:33.8418137Z 2025-09-09T14:19:33.8418432Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:33.8418847Z model fx: GraphModule( 2025-09-09T14:19:33.8419186Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:33.8420267Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:33.8421823Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:33.8422400Z ) 2025-09-09T14:19:33.8422602Z (conv1): ConvBn2d( 2025-09-09T14:19:33.8422872Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:19:33.8423364Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:19:33.8423889Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:33.8424989Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0014, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:19:33.8426587Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1897, -0.1787, -0.1913]), max_val=tensor([0.1870, 0.1478, 0.1740])) 2025-09-09T14:19:33.8427314Z ) 2025-09-09T14:19:33.8427506Z ) 2025-09-09T14:19:33.8427811Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:33.8428879Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0192]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:33.8430154Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.725703001022339, max_val=2.165140390396118) 2025-09-09T14:19:33.8430721Z ) 2025-09-09T14:19:33.8430921Z (conv2): ConvBn2d( 2025-09-09T14:19:33.8431171Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:19:33.8431632Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:19:33.8432159Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:33.8433252Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0012, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, 
quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:19:33.8434825Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1469, -0.1921, -0.1853]), max_val=tensor([0.1307, 0.1779, 0.1810])) 2025-09-09T14:19:33.8435563Z ) 2025-09-09T14:19:33.8435754Z ) 2025-09-09T14:19:33.8436063Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:33.8437132Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0106]), zero_point=tensor([-2], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:33.8438416Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.3412710428237915, max_val=1.3707175254821777) 2025-09-09T14:19:33.8438993Z ) 2025-09-09T14:19:33.8439187Z ) 2025-09-09T14:19:33.8439290Z 2025-09-09T14:19:33.8439295Z 2025-09-09T14:19:33.8439299Z 2025-09-09T14:19:33.8439403Z def forward(self, x): 2025-09-09T14:19:33.8439783Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:33.8440394Z conv1 = self.conv1(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:19:33.8441017Z activation_post_process_1 = self.activation_post_process_1(conv1); conv1 = None 2025-09-09T14:19:33.8441646Z conv2 = self.conv2(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:19:33.8442265Z activation_post_process_2 = self.activation_post_process_2(conv2); conv2 = None 2025-09-09T14:19:33.8442758Z return activation_post_process_2 2025-09-09T14:19:33.8443046Z 2025-09-09T14:19:33.8443340Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:33.8443745Z diff: tensor([[[[0.]], 2025-09-09T14:19:33.8443897Z 2025-09-09T14:19:33.8443976Z [[0.]], 2025-09-09T14:19:33.8444120Z 2025-09-09T14:19:33.8444200Z [[0.]]], 2025-09-09T14:19:33.8444328Z 2025-09-09T14:19:33.8444409Z 2025-09-09T14:19:33.8444489Z [[[0.]], 2025-09-09T14:19:33.8444627Z 2025-09-09T14:19:33.8444705Z [[0.]], 2025-09-09T14:19:33.8444830Z 2025-09-09T14:19:33.8444920Z [[0.]]], 2025-09-09T14:19:33.8445047Z 2025-09-09T14:19:33.8445051Z 2025-09-09T14:19:33.8445130Z [[[0.]], 2025-09-09T14:19:33.8445268Z 2025-09-09T14:19:33.8445345Z [[0.]], 2025-09-09T14:19:33.8445469Z 2025-09-09T14:19:33.8445575Z [[0.]]]], grad_fn=) 2025-09-09T14:19:33.8445904Z converted model pt2e: GraphModule( 2025-09-09T14:19:33.8446298Z (conv1): Module() 2025-09-09T14:19:33.8446526Z (bn1): Module() 2025-09-09T14:19:33.8446750Z (conv2): Module() 2025-09-09T14:19:33.8446958Z (bn2): Module() 2025-09-09T14:19:33.8447171Z ) 2025-09-09T14:19:33.8447274Z 2025-09-09T14:19:33.8447278Z 2025-09-09T14:19:33.8447282Z 2025-09-09T14:19:33.8447370Z def forward(self, x): 2025-09-09T14:19:33.8447678Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:33.8448047Z conv2_bias = self.conv2.bias 2025-09-09T14:19:33.8448797Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:33.8450242Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:33.8451220Z _scale_0 = self._scale_0 2025-09-09T14:19:33.8451509Z _zero_point_0 = self._zero_point_0 2025-09-09T14:19:33.8451800Z _scale_1 = self._scale_1 
2025-09-09T14:19:33.8452076Z _zero_point_1 = self._zero_point_1 2025-09-09T14:19:33.8452400Z quantize_per_channel_1 = self._frozen_param0 2025-09-09T14:19:33.8453438Z dequantize_per_channel_1 = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_1, _scale_1, _zero_point_1, 0, -127, 127, torch.int8); quantize_per_channel_1 = _scale_1 = _zero_point_1 = None 2025-09-09T14:19:33.8454469Z conv1_weight_bias = self.conv1.weight_bias 2025-09-09T14:19:33.8455450Z conv2d_5 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel_1, conv1_weight_bias); dequantize_per_tensor_default = dequantize_per_channel_1 = conv1_weight_bias = None 2025-09-09T14:19:33.8456918Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_5, 0.019179778173565865, 14, -128, 127, torch.int8); conv2d_5 = None 2025-09-09T14:19:33.8458425Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.019179778173565865, 14, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:19:33.8459436Z quantize_per_channel = self._frozen_param1 2025-09-09T14:19:33.8460450Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:19:33.8462029Z conv2d_4 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default_1, dequantize_per_channel, conv2_bias); dequantize_per_tensor_default_1 = dequantize_per_channel = conv2_bias = None 2025-09-09T14:19:33.8463429Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_4, 0.010635248385369778, -2, -128, 127, torch.int8); conv2d_4 = None 2025-09-09T14:19:33.8464936Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010635248385369778, -2, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:19:33.8466083Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:19:33.8466551Z 2025-09-09T14:19:35.3481449Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:35.3482141Z onverted model fx: GraphModule( 2025-09-09T14:19:35.3482739Z (conv1): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:19:35.3483326Z (conv2): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:19:35.3483770Z ) 2025-09-09T14:19:35.3483874Z 2025-09-09T14:19:35.3483879Z 2025-09-09T14:19:35.3483883Z 2025-09-09T14:19:35.3483986Z def forward(self, x): 2025-09-09T14:19:35.3484690Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:35.3486267Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:35.3487429Z conv1 = self.conv1(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:19:35.3488433Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1, 0.019179778173565865, 14, -128, 127, torch.int8); conv1 = None 2025-09-09T14:19:35.3489923Z 
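The quantize_per_tensor call just above and its matching dequantize just below implement plain affine int8 quantization: quantize clamps round(x / scale) + zero_point into [quant_min, quant_max], and dequantize maps the integers back with (q - zero_point) * scale; the dequantize_per_channel variants used for the conv weights do the same with one scale/zero_point per output channel. A minimal standalone sketch of that arithmetic (my own illustration, not code from the test; fake_quant_roundtrip is a hypothetical helper, and the qparams are copied from the input-activation observer printed in the graphs above):

```python
import torch

def fake_quant_roundtrip(x, scale, zero_point, qmin=-128, qmax=127):
    # quantize_per_tensor: q = clamp(round(x / scale) + zero_point, qmin, qmax)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax).to(torch.int8)
    # dequantize_per_tensor: (q - zero_point) * scale
    return (q.to(torch.float32) - zero_point) * scale

x = torch.empty(1, 3, 5, 5).uniform_(-2.0, 2.0)  # stays inside the representable range
y = fake_quant_roundtrip(x, scale=0.019492028281092644, zero_point=-13)
# inside the representable range the round-trip error is at most scale / 2
assert (x - y).abs().max() <= 0.019492028281092644 / 2 + 1e-6
```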
dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.019179778173565865, 14, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:19:35.3491112Z conv2 = self.conv2(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:19:35.3492119Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2, 0.010635248385369778, -2, -128, 127, torch.int8); conv2 = None 2025-09-09T14:19:35.3493603Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010635248385369778, -2, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:19:35.3494619Z return dequantize_per_tensor_default_2 2025-09-09T14:19:35.3494926Z 2025-09-09T14:19:35.3495221Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:35.3495627Z diff: tensor([[[[0.]], 2025-09-09T14:19:35.3495779Z 2025-09-09T14:19:35.3495870Z [[0.]], 2025-09-09T14:19:35.3495997Z 2025-09-09T14:19:35.3496075Z [[0.]]], 2025-09-09T14:19:35.3496203Z 2025-09-09T14:19:35.3496207Z 2025-09-09T14:19:35.3496295Z [[[0.]], 2025-09-09T14:19:35.3496419Z 2025-09-09T14:19:35.3496498Z [[0.]], 2025-09-09T14:19:35.3496633Z 2025-09-09T14:19:35.3496709Z [[0.]]], 2025-09-09T14:19:35.3496833Z 2025-09-09T14:19:35.3496837Z 2025-09-09T14:19:35.3496942Z [[[0.]], 2025-09-09T14:19:35.3497067Z 2025-09-09T14:19:35.3497146Z [[0.]], 2025-09-09T14:19:35.3497281Z 2025-09-09T14:19:35.3497361Z [[0.]]]]) 2025-09-09T14:19:35.3497585Z model pt2e: GraphModule( 2025-09-09T14:19:35.3497850Z (conv1): Module() 2025-09-09T14:19:35.3498082Z (bn1): Module() 2025-09-09T14:19:35.3498291Z (conv2): Module() 2025-09-09T14:19:35.3498515Z (bn2): Module() 2025-09-09T14:19:35.3498834Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:35.3499923Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:35.3501302Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:35.3501886Z ) 2025-09-09T14:19:35.3502194Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:35.3503266Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:19:35.3504643Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19212442636489868, max_val=0.18097376823425293) 2025-09-09T14:19:35.3505242Z ) 2025-09-09T14:19:35.3505536Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:35.3506627Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:19:35.3507960Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19127479195594788, max_val=0.1870359182357788) 2025-09-09T14:19:35.3508553Z ) 2025-09-09T14:19:35.3508860Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:35.3510092Z 
fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0192]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:35.3511362Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.725703001022339, max_val=2.165140390396118) 2025-09-09T14:19:35.3511928Z ) 2025-09-09T14:19:35.3512231Z (activation_post_process_4): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:35.3513300Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0107]), zero_point=tensor([-2], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:35.3514546Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.349876046180725, max_val=1.373764157295227) 2025-09-09T14:19:35.3515203Z ) 2025-09-09T14:19:35.3515377Z ) 2025-09-09T14:19:35.3515492Z 2025-09-09T14:19:35.3515496Z 2025-09-09T14:19:35.3515500Z 2025-09-09T14:19:35.3515587Z def forward(self, x): 2025-09-09T14:19:35.3515903Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:35.3516278Z conv1_weight = self.conv1.weight 2025-09-09T14:19:35.3516592Z bn1_weight = self.bn1.weight 2025-09-09T14:19:35.3516868Z bn1_bias = self.bn1.bias 2025-09-09T14:19:35.3517153Z conv2_weight = self.conv2.weight 2025-09-09T14:19:35.3517448Z conv2_bias = self.conv2.bias 2025-09-09T14:19:35.3517739Z bn2_weight = self.bn2.weight 2025-09-09T14:19:35.3518013Z bn2_bias = self.bn2.bias 2025-09-09T14:19:35.3518310Z bn1_running_mean = self.bn1.running_mean 2025-09-09T14:19:35.3518641Z bn1_running_var = self.bn1.running_var 2025-09-09T14:19:35.3519024Z bn1_num_batches_tracked = self.bn1.num_batches_tracked 2025-09-09T14:19:35.3519414Z bn2_running_mean = self.bn2.running_mean 2025-09-09T14:19:35.3519738Z bn2_running_var = self.bn2.running_var 2025-09-09T14:19:35.3520109Z bn2_num_batches_tracked = self.bn2.num_batches_tracked 2025-09-09T14:19:35.3520595Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:35.3521269Z add_ = torch.ops.aten.add_.Tensor(bn1_num_batches_tracked, 1); bn1_num_batches_tracked = add_ = None 2025-09-09T14:19:35.3522017Z add__1 = torch.ops.aten.add_.Tensor(bn2_num_batches_tracked, 1); bn2_num_batches_tracked = add__1 = None 2025-09-09T14:19:35.3522626Z add = torch.ops.aten.add.Tensor(bn2_running_var, 1e-05) 2025-09-09T14:19:35.3523062Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:19:35.3523508Z div = torch.ops.aten.div.Tensor(bn2_weight, sqrt); sqrt = None 2025-09-09T14:19:35.3524013Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:19:35.3524579Z mul = torch.ops.aten.mul.Tensor(conv2_weight, reshape); conv2_weight = reshape = None 2025-09-09T14:19:35.3525225Z activation_post_process_3 = self.activation_post_process_3(mul); mul = None 2025-09-09T14:19:35.3525914Z zeros_like = torch.ops.aten.zeros_like.default(conv2_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:19:35.3526657Z add_2 = torch.ops.aten.add.Tensor(bn1_running_var, 1e-05) 2025-09-09T14:19:35.3527119Z sqrt_1 = torch.ops.aten.sqrt.default(add_2); add_2 = None 2025-09-09T14:19:35.3527598Z div_2 = torch.ops.aten.div.Tensor(bn1_weight, sqrt_1); sqrt_1 = None 2025-09-09T14:19:35.3528124Z reshape_3 = torch.ops.aten.reshape.default(div_2, [-1, 1, 1, 1]) 2025-09-09T14:19:35.3528715Z mul_1 = torch.ops.aten.mul.Tensor(conv1_weight, reshape_3); 
conv1_weight = reshape_3 = None 2025-09-09T14:19:35.3529388Z activation_post_process_1 = self.activation_post_process_1(mul_1); mul_1 = None 2025-09-09T14:19:35.3530440Z conv2d_3 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:19:35.3531391Z reshape_4 = torch.ops.aten.reshape.default(div_2, [1, -1, 1, 1]); div_2 = None 2025-09-09T14:19:35.3532017Z div_3 = torch.ops.aten.div.Tensor(conv2d_3, reshape_4); conv2d_3 = reshape_4 = None 2025-09-09T14:19:35.3533062Z batch_norm_3 = torch.ops.aten.batch_norm.default(div_3, bn1_weight, bn1_bias, bn1_running_mean, bn1_running_var, True, 0.1, 1e-05, True); div_3 = bn1_weight = bn1_bias = bn1_running_mean = bn1_running_var = None 2025-09-09T14:19:35.3534169Z activation_post_process_2 = self.activation_post_process_2(batch_norm_3); batch_norm_3 = None 2025-09-09T14:19:35.3535268Z conv2d_2 = torch.ops.aten.conv2d.default(activation_post_process_2, activation_post_process_3, zeros_like); activation_post_process_2 = activation_post_process_3 = zeros_like = None 2025-09-09T14:19:35.3536274Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:19:35.3536885Z div_1 = torch.ops.aten.div.Tensor(conv2d_2, reshape_1); conv2d_2 = reshape_1 = None 2025-09-09T14:19:35.3537538Z reshape_2 = torch.ops.aten.reshape.default(conv2_bias, [1, -1, 1, 1]); conv2_bias = None 2025-09-09T14:19:35.3538172Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:19:35.3539197Z batch_norm_2 = torch.ops.aten.batch_norm.default(add_1, bn2_weight, bn2_bias, bn2_running_mean, bn2_running_var, True, 0.1, 1e-05, True); add_1 = bn2_weight = bn2_bias = bn2_running_mean = bn2_running_var = None 2025-09-09T14:19:35.3540286Z activation_post_process_4 = self.activation_post_process_4(batch_norm_2); batch_norm_2 = None 2025-09-09T14:19:35.3540958Z return pytree.tree_unflatten((activation_post_process_4,), self._out_spec) 2025-09-09T14:19:35.3541386Z 2025-09-09T14:19:35.3541696Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:35.3542103Z model fx: GraphModule( 2025-09-09T14:19:35.3542443Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:35.3543518Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0195]), zero_point=tensor([-13], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:35.3544787Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.247849225997925, max_val=2.7226178646087646) 2025-09-09T14:19:35.3545372Z ) 2025-09-09T14:19:35.3545570Z (conv1): ConvBn2d( 2025-09-09T14:19:35.3545840Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:19:58.4594698Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:19:58.4595644Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:58.4597487Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:19:58.4616881Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19127479195594788, max_val=0.1870359182357788) 2025-09-09T14:19:58.4618199Z ) 2025-09-09T14:19:58.4618490Z ) 
2025-09-09T14:19:58.4619405Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:58.4621232Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0192]), zero_point=tensor([14], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:58.4623485Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.725703001022339, max_val=2.165140390396118) 2025-09-09T14:19:58.4624563Z ) 2025-09-09T14:19:58.4624874Z (conv2): ConvBn2d( 2025-09-09T14:19:58.4626708Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:19:58.4627550Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:19:58.4628534Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:58.4630539Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:19:58.4633129Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19212442636489868, max_val=0.18097376823425293) 2025-09-09T14:19:58.4634233Z ) 2025-09-09T14:19:58.4634538Z ) 2025-09-09T14:19:58.4635266Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:58.4637298Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0107]), zero_point=tensor([-2], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:58.4639729Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.349876046180725, max_val=1.373764157295227) 2025-09-09T14:19:58.4640886Z ) 2025-09-09T14:19:58.4641226Z ) 2025-09-09T14:19:58.4641436Z 2025-09-09T14:19:58.4641444Z 2025-09-09T14:19:58.4641451Z 2025-09-09T14:19:58.4641618Z def forward(self, x): 2025-09-09T14:19:58.4642352Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:19:58.4643413Z conv1 = self.conv1(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:19:58.4644462Z activation_post_process_1 = self.activation_post_process_1(conv1); conv1 = None 2025-09-09T14:19:58.4645601Z conv2 = self.conv2(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:19:58.4646822Z activation_post_process_2 = self.activation_post_process_2(conv2); conv2 = None 2025-09-09T14:19:58.4647684Z return activation_post_process_2 2025-09-09T14:19:58.4648120Z 2025-09-09T14:19:58.4648644Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:58.4649322Z diff: tensor([[[[0.]], 2025-09-09T14:19:58.4649601Z 2025-09-09T14:19:58.4649719Z [[0.]], 2025-09-09T14:19:58.4649902Z 2025-09-09T14:19:58.4650017Z [[0.]]], 2025-09-09T14:19:58.4650241Z 2025-09-09T14:19:58.4650248Z 2025-09-09T14:19:58.4650393Z [[[0.]], 2025-09-09T14:19:58.4650599Z 2025-09-09T14:19:58.4650722Z [[0.]], 2025-09-09T14:19:58.4650933Z 2025-09-09T14:19:58.4651075Z [[0.]]], 2025-09-09T14:19:58.4651284Z 2025-09-09T14:19:58.4651291Z 2025-09-09T14:19:58.4651442Z [[[0.]], 2025-09-09T14:19:58.4651640Z 2025-09-09T14:19:58.4651777Z [[0.]], 2025-09-09T14:19:58.4652013Z 2025-09-09T14:19:58.4652203Z [[0.]]]], grad_fn=) 2025-09-09T14:19:58.4652676Z converted model pt2e: GraphModule( 2025-09-09T14:19:58.4653108Z (conv1): Module() 2025-09-09T14:19:58.4653460Z (bn1): Module() 2025-09-09T14:19:58.4653793Z 
(conv2): Module() 2025-09-09T14:19:58.4654157Z (bn2): Module() 2025-09-09T14:19:58.4654445Z ) 2025-09-09T14:19:58.4654586Z 2025-09-09T14:19:58.4654591Z 2025-09-09T14:19:58.4654612Z 2025-09-09T14:19:58.4654736Z def forward(self, x): 2025-09-09T14:19:58.4655182Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:19:58.4655987Z conv2_bias = self.conv2.bias 2025-09-09T14:19:58.4657232Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:58.4659638Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:58.4661405Z quantize_per_tensor_1 = self._frozen_param0 2025-09-09T14:19:58.4662957Z dequantize_per_tensor_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_1, 0.0015061007579788566, 0, -127, 127, torch.int8); quantize_per_tensor_1 = None 2025-09-09T14:19:58.4664426Z conv1_weight_bias = self.conv1.weight_bias 2025-09-09T14:19:58.4665904Z conv2d_5 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_tensor_1, conv1_weight_bias); dequantize_per_tensor_default = dequantize_per_tensor_1 = conv1_weight_bias = None 2025-09-09T14:19:58.4668376Z quantize_per_tensor_default_3 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_5, 0.019179778173565865, 14, -128, 127, torch.int8); conv2d_5 = None 2025-09-09T14:19:58.4671130Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_3, 0.019179778173565865, 14, -128, 127, torch.int8); quantize_per_tensor_default_3 = None 2025-09-09T14:19:58.4673093Z quantize_per_tensor = self._frozen_param1 2025-09-09T14:19:58.4674566Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.001512790797278285, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:19:58.4677150Z conv2d_4 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default_3, dequantize_per_tensor, conv2_bias); dequantize_per_tensor_default_3 = dequantize_per_tensor = conv2_bias = None 2025-09-09T14:19:58.4679362Z quantize_per_tensor_default_4 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_4, 0.010680941864848137, -2, -128, 127, torch.int8); conv2d_4 = None 2025-09-09T14:19:58.4681882Z dequantize_per_tensor_default_4 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_4, 0.010680941864848137, -2, -128, 127, torch.int8); quantize_per_tensor_default_4 = None 2025-09-09T14:19:58.4683999Z return pytree.tree_unflatten((dequantize_per_tensor_default_4,), self._out_spec) 2025-09-09T14:19:58.4684830Z 2025-09-09T14:19:58.4685373Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:58.4686121Z onverted model fx: GraphModule( 2025-09-09T14:19:58.4687000Z (conv1): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:19:58.4687881Z (conv2): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:19:58.4688598Z ) 2025-09-09T14:19:58.4688755Z 2025-09-09T14:19:58.4688760Z 2025-09-09T14:19:58.4688797Z 2025-09-09T14:19:58.4688932Z def forward(self, x): 2025-09-09T14:19:58.4690003Z quantize_per_tensor_default = 
torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.019492028281092644, -13, -128, 127, torch.int8); x = None 2025-09-09T14:19:58.4692310Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.019492028281092644, -13, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:19:58.4694209Z conv1 = self.conv1(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:19:58.4695860Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv1, 0.019179778173565865, 14, -128, 127, torch.int8); conv1 = None 2025-09-09T14:19:58.4698502Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.019179778173565865, 14, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:19:58.4700435Z conv2 = self.conv2(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:19:58.4702051Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2, 0.010680941864848137, -2, -128, 127, torch.int8); conv2 = None 2025-09-09T14:19:58.4704540Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.010680941864848137, -2, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:19:58.4706386Z return dequantize_per_tensor_default_2 2025-09-09T14:19:58.4706850Z 2025-09-09T14:19:58.4707315Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:19:58.4707944Z diff: tensor([[[[0.]], 2025-09-09T14:19:58.4708179Z 2025-09-09T14:19:58.4708322Z [[0.]], 2025-09-09T14:19:58.4708517Z 2025-09-09T14:19:58.4708629Z [[0.]]], 2025-09-09T14:19:58.4708853Z 2025-09-09T14:19:58.4708860Z 2025-09-09T14:19:58.4708981Z [[[0.]], 2025-09-09T14:19:58.4709175Z 2025-09-09T14:19:58.4709294Z [[0.]], 2025-09-09T14:19:58.4709506Z 2025-09-09T14:19:58.4709639Z [[0.]]], 2025-09-09T14:19:58.4709836Z 2025-09-09T14:19:58.4709841Z 2025-09-09T14:19:58.4710222Z [[[0.]], 2025-09-09T14:19:58.4710427Z 2025-09-09T14:19:58.4710553Z [[0.]], 2025-09-09T14:19:58.4710770Z 2025-09-09T14:19:58.4710886Z [[0.]]]]) 2025-09-09T14:19:58.4711480Z PASSED 2025-09-09T14:19:58.4712730Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_per_channel_weight_bias PASSED 2025-09-09T14:19:58.4714553Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_relu_fusion model pt2e: GraphModule( 2025-09-09T14:19:58.4715834Z (conv): Module() 2025-09-09T14:19:58.4716199Z (bn): Module() 2025-09-09T14:19:58.4716710Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:58.4718396Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:19:58.4720423Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:19:58.4721275Z ) 2025-09-09T14:19:58.4721747Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:19:58.4723557Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, 
quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:19:58.4726040Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1761, -0.1923, -0.1707]), max_val=tensor([0.1830, 0.1717, 0.1892])) 2025-09-09T14:20:09.8406443Z ) 2025-09-09T14:20:09.8406993Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:09.8408127Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0065]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:09.8410426Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.6655889749526978) 2025-09-09T14:20:09.8411000Z ) 2025-09-09T14:20:09.8411193Z ) 2025-09-09T14:20:09.8411295Z 2025-09-09T14:20:09.8411300Z 2025-09-09T14:20:09.8411305Z 2025-09-09T14:20:09.8411395Z def forward(self, x): 2025-09-09T14:20:09.8411711Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:09.8412081Z conv_weight = self.conv.weight 2025-09-09T14:20:09.8412384Z conv_bias = self.conv.bias 2025-09-09T14:20:09.8412956Z bn_weight = self.bn.weight 2025-09-09T14:20:09.8413227Z bn_bias = self.bn.bias 2025-09-09T14:20:09.8413513Z bn_running_mean = self.bn.running_mean 2025-09-09T14:20:09.8413837Z bn_running_var = self.bn.running_var 2025-09-09T14:20:09.8414208Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:20:09.8414693Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:09.8415369Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:20:09.8416211Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:20:09.8416695Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:20:09.8417452Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:20:09.8418205Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:20:09.8418836Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:20:09.8419857Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:20:09.8420561Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:20:09.8421685Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:20:09.8422710Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:20:09.8423331Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:20:09.8423981Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 2025-09-09T14:20:09.8424615Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:20:09.8425615Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:20:09.8426572Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:20:09.8427163Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:20:09.8427761Z return 
pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:09.8428200Z 2025-09-09T14:20:09.8428495Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:09.8428912Z model fx: GraphModule( 2025-09-09T14:20:09.8429263Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:09.8430331Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:09.8431613Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:09.8432182Z ) 2025-09-09T14:20:09.8432389Z (conv): ConvBnReLU2d( 2025-09-09T14:20:09.8432649Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:20:09.8433121Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:20:09.8433647Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:09.8434823Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:20:09.8436336Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1761, -0.1923, -0.1707]), max_val=tensor([0.1830, 0.1717, 0.1892])) 2025-09-09T14:20:09.8437085Z ) 2025-09-09T14:20:09.8437265Z ) 2025-09-09T14:20:09.8437708Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:09.8438786Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0065]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:09.8440013Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.6655889749526978) 2025-09-09T14:20:09.8440541Z ) 2025-09-09T14:20:09.8440805Z ) 2025-09-09T14:20:09.8440905Z 2025-09-09T14:20:09.8440910Z 2025-09-09T14:20:09.8440914Z 2025-09-09T14:20:09.8441016Z def forward(self, x): 2025-09-09T14:20:09.8441395Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:09.8441998Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:20:09.8442603Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:20:09.8443086Z return activation_post_process_1 2025-09-09T14:20:09.8443362Z 2025-09-09T14:20:09.8443668Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:09.8444084Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:09.8444335Z [0., 0., 0.], 2025-09-09T14:20:09.8444567Z [0., 0., 0.]], 2025-09-09T14:20:09.8444718Z 2025-09-09T14:20:09.8444800Z [[0., 0., 0.], 2025-09-09T14:20:09.8445030Z [0., 0., 0.], 2025-09-09T14:20:09.8445247Z [0., 0., 0.]], 2025-09-09T14:20:09.8445414Z 2025-09-09T14:20:09.8445492Z [[0., 0., 0.], 2025-09-09T14:20:09.8445708Z [0., 0., 0.], 2025-09-09T14:20:09.8445968Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:20:09.8446309Z converted model pt2e: GraphModule( 2025-09-09T14:20:09.8446587Z (conv): Module() 2025-09-09T14:20:09.8446809Z (bn): Module() 2025-09-09T14:20:09.8447011Z ) 2025-09-09T14:20:09.8447112Z 2025-09-09T14:20:09.8447116Z 2025-09-09T14:20:09.8447136Z 
2025-09-09T14:20:09.8447225Z def forward(self, x): 2025-09-09T14:20:09.8447524Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:09.8447895Z conv_bias = self.conv.bias 2025-09-09T14:20:09.8448616Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:09.8450060Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:09.8451055Z _scale_0 = self._scale_0 2025-09-09T14:20:09.8451334Z _zero_point_0 = self._zero_point_0 2025-09-09T14:20:09.8451675Z quantize_per_channel = self._frozen_param0 2025-09-09T14:20:09.8452683Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:20:09.8454251Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:20:09.8455249Z relu = torch.ops.aten.relu.default(conv2d_2); conv2d_2 = None 2025-09-09T14:20:09.8456145Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.006531721446663141, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:20:09.8457650Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.006531721446663141, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:09.8458819Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:20:09.8459291Z 2025-09-09T14:20:09.8459683Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:09.8460114Z onverted model fx: GraphModule( 2025-09-09T14:20:09.8460390Z (conv): ConvReLU2d( 2025-09-09T14:20:09.8460772Z (0): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:20:09.8461195Z (1): ReLU() 2025-09-09T14:20:09.8461396Z ) 2025-09-09T14:20:09.8461586Z ) 2025-09-09T14:20:09.8461688Z 2025-09-09T14:20:09.8461692Z 2025-09-09T14:20:09.8461696Z 2025-09-09T14:20:09.8461787Z def forward(self, x): 2025-09-09T14:20:09.8462555Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:09.8463982Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:09.8465159Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:20:09.8466152Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.006531721446663141, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:20:09.8467631Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.006531721446663141, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:09.8468657Z return dequantize_per_tensor_default_1 2025-09-09T14:20:09.8468968Z 
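The paired dumps in this test ("model pt2e" / "converted model pt2e" against "model fx" / "converted model fx", such as the forward that returns just above) come from running the same small module through the PT2E export-based quantization flow and the older FX flow and comparing the outputs. A hedged sketch of the PT2E side only, assuming a recent PyTorch nightly where export_for_training and the torch.ao XNNPACKQuantizer are still importable from these paths (they move between releases, and the test itself may wire things differently):

```python
import torch
from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

class ConvBnReLU(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 3, 3)
        self.bn = torch.nn.BatchNorm2d(3)

    def forward(self, x):
        return torch.nn.functional.relu(self.bn(self.conv(x)))

example_inputs = (torch.randn(1, 3, 5, 5),)
m = torch.export.export_for_training(ConvBnReLU(), example_inputs).module()

quantizer = XNNPACKQuantizer().set_global(
    get_symmetric_quantization_config(is_qat=True)
)
m = prepare_qat_pt2e(m, quantizer)  # inserts the FusedMovingAvgObsFakeQuantize observers
m(*example_inputs)                  # one calibration/training step; "model pt2e" is this stage
m = convert_pt2e(m)                 # rewrites to the quantize/dequantize decomposed ops
```

The "model fx" / "converted model fx" dumps appear to come from the parallel prepare_qat_fx / convert_to_reference_fx path (hence the QuantizedConv2d(Reference) modules), used here as a numerical baseline; the all-zero diff tensors printed after each pair show the two paths agree.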
2025-09-09T14:20:09.8469260Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:09.8469674Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:09.8469926Z [0., 0., 0.], 2025-09-09T14:20:09.8470158Z [0., 0., 0.]], 2025-09-09T14:20:09.8470308Z 2025-09-09T14:20:09.8470388Z [[0., 0., 0.], 2025-09-09T14:20:09.8470617Z [0., 0., 0.], 2025-09-09T14:20:09.8470836Z [0., 0., 0.]], 2025-09-09T14:20:09.8470997Z 2025-09-09T14:20:09.8471075Z [[0., 0., 0.], 2025-09-09T14:20:09.8471301Z [0., 0., 0.], 2025-09-09T14:20:09.8471518Z [0., 0., 0.]]]]) 2025-09-09T14:20:09.8471774Z model pt2e: GraphModule( 2025-09-09T14:20:09.8472014Z (conv): Module() 2025-09-09T14:20:09.8472236Z (bn): Module() 2025-09-09T14:20:09.8472550Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:09.8473639Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:09.8475010Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:09.8475585Z ) 2025-09-09T14:20:22.9340862Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:22.9342131Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:20:22.9343488Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.1923346221446991, max_val=0.18921314179897308) 2025-09-09T14:20:22.9344095Z ) 2025-09-09T14:20:22.9344415Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:22.9345532Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0065]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:22.9346795Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.6606048345565796) 2025-09-09T14:20:22.9347354Z ) 2025-09-09T14:20:22.9347539Z ) 2025-09-09T14:20:22.9347659Z 2025-09-09T14:20:22.9347664Z 2025-09-09T14:20:22.9347974Z 2025-09-09T14:20:22.9348072Z def forward(self, x): 2025-09-09T14:20:22.9348404Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:22.9348788Z conv_weight = self.conv.weight 2025-09-09T14:20:22.9349109Z conv_bias = self.conv.bias 2025-09-09T14:20:22.9349391Z bn_weight = self.bn.weight 2025-09-09T14:20:22.9349679Z bn_bias = self.bn.bias 2025-09-09T14:20:22.9349960Z bn_running_mean = self.bn.running_mean 2025-09-09T14:20:22.9350304Z bn_running_var = self.bn.running_var 2025-09-09T14:20:22.9350786Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:20:22.9351302Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:22.9351987Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:20:22.9352589Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:20:22.9353042Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:20:22.9353504Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:20:22.9354051Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 
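The prepared forward above and below this point shows the standard QAT batch-norm-folding rewrite: the conv weight is multiplied by gamma / sqrt(running_var + eps) (the div / reshape / mul steps), so the weight fake-quantizer observes the weights as they will look after folding; the conv then runs with a zeros_like bias, and its output is divided by the same per-channel factor and has the original bias added back before batch_norm. Ignoring the fake-quantize, that rewrite is numerically a no-op for the values batch_norm sees. A small self-contained check of the identity (my own illustration, not code from the test):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
conv = torch.nn.Conv2d(3, 3, 3)
bn = torch.nn.BatchNorm2d(3)
bn.running_var.uniform_(0.5, 1.5)  # make the running stats non-trivial
x = torch.randn(1, 3, 5, 5)

scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)      # "div" in the graph
scaled_w = conv.weight * scale.reshape(-1, 1, 1, 1)          # "mul": fold scale into the weight
out = F.conv2d(x, scaled_w, torch.zeros_like(conv.bias))     # conv with a zero bias
out = out / scale.reshape(1, -1, 1, 1)                       # undo the per-channel scaling
out = out + conv.bias.reshape(1, -1, 1, 1)                   # re-apply the original bias
ref = F.conv2d(x, conv.weight, conv.bias)                    # plain conv, no folding

print(torch.max(torch.abs(out - ref)))  # float32 round-off only, on the order of 1e-7
```

The fake-quantize of the scaled weight (activation_post_process_1 / activation_post_process_3 in the dumps) is exactly the piece this identity leaves out: it is what makes the QAT graph see the same weight-quantization error that the folded, converted model will have at inference time.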
2025-09-09T14:20:22.9354842Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:20:22.9355650Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:20:22.9356855Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:20:22.9358074Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:20:22.9359380Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:20:22.9360002Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:20:22.9360691Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 2025-09-09T14:20:22.9361346Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:20:22.9362497Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:20:22.9363516Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:20:22.9364129Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:20:22.9364777Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:22.9365244Z 2025-09-09T14:20:22.9365559Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:22.9365995Z model fx: GraphModule( 2025-09-09T14:20:22.9366362Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:22.9367513Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:22.9368852Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:22.9369471Z ) 2025-09-09T14:20:22.9369693Z (conv): ConvBnReLU2d( 2025-09-09T14:20:22.9369978Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:20:22.9370473Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:20:22.9371014Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:22.9372274Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:20:22.9373661Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.1923346221446991, max_val=0.18921314179897308) 2025-09-09T14:20:22.9374272Z ) 2025-09-09T14:20:22.9374477Z ) 2025-09-09T14:20:22.9374905Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:22.9376033Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0065]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:22.9377367Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.6606048345565796) 2025-09-09T14:20:22.9377932Z ) 
2025-09-09T14:20:22.9378130Z ) 2025-09-09T14:20:22.9378236Z 2025-09-09T14:20:22.9378240Z 2025-09-09T14:20:22.9378244Z 2025-09-09T14:20:22.9378339Z def forward(self, x): 2025-09-09T14:20:22.9378752Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:22.9379366Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:20:22.9380008Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:20:22.9380513Z return activation_post_process_1 2025-09-09T14:20:22.9380800Z 2025-09-09T14:20:22.9381117Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:22.9381533Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:22.9381814Z [0., 0., 0.], 2025-09-09T14:20:22.9382045Z [0., 0., 0.]], 2025-09-09T14:20:22.9382213Z 2025-09-09T14:20:22.9382299Z [[0., 0., 0.], 2025-09-09T14:20:22.9382522Z [0., 0., 0.], 2025-09-09T14:20:22.9382759Z [0., 0., 0.]], 2025-09-09T14:20:22.9382911Z 2025-09-09T14:20:22.9382991Z [[0., 0., 0.], 2025-09-09T14:20:22.9383228Z [0., 0., 0.], 2025-09-09T14:20:22.9383503Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:20:22.9383843Z converted model pt2e: GraphModule( 2025-09-09T14:20:22.9384142Z (conv): Module() 2025-09-09T14:20:22.9384362Z (bn): Module() 2025-09-09T14:20:22.9384580Z ) 2025-09-09T14:20:22.9384686Z 2025-09-09T14:20:22.9384690Z 2025-09-09T14:20:22.9384694Z 2025-09-09T14:20:22.9384784Z def forward(self, x): 2025-09-09T14:20:22.9385105Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:22.9385477Z conv_bias = self.conv.bias 2025-09-09T14:20:22.9386233Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:22.9387723Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:22.9388765Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:20:22.9389711Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0015144458739086986, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:20:22.9391219Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:20:22.9392207Z relu = torch.ops.aten.relu.default(conv2d_2); conv2d_2 = None 2025-09-09T14:20:22.9393130Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.006512175779789686, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:20:22.9394753Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.006512175779789686, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:20:22.9396064Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:20:22.9396553Z 2025-09-09T14:20:22.9396858Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:22.9397294Z onverted model fx: GraphModule( 2025-09-09T14:20:22.9397575Z (conv): ConvReLU2d( 2025-09-09T14:20:22.9397966Z (0): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:20:22.9398391Z (1): ReLU() 
2025-09-09T14:20:22.9398612Z ) 2025-09-09T14:20:22.9398812Z ) 2025-09-09T14:20:22.9398918Z 2025-09-09T14:20:22.9399014Z 2025-09-09T14:20:22.9399018Z 2025-09-09T14:20:22.9399112Z def forward(self, x): 2025-09-09T14:20:22.9399845Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:22.9401314Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:22.9402515Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:20:22.9403534Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.006512175779789686, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:20:22.9405061Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.006512175779789686, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:22.9406119Z return dequantize_per_tensor_default_1 2025-09-09T14:20:22.9406422Z 2025-09-09T14:20:22.9406739Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:22.9407166Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:22.9407430Z [0., 0., 0.], 2025-09-09T14:20:22.9407673Z [0., 0., 0.]], 2025-09-09T14:20:22.9407826Z 2025-09-09T14:20:22.9407915Z [[0., 0., 0.], 2025-09-09T14:20:22.9408158Z [0., 0., 0.], 2025-09-09T14:20:22.9408384Z [0., 0., 0.]], 2025-09-09T14:20:22.9408549Z 2025-09-09T14:20:22.9408630Z [[0., 0., 0.], 2025-09-09T14:20:22.9408854Z [0., 0., 0.], 2025-09-09T14:20:22.9409094Z [0., 0., 0.]]]]) 2025-09-09T14:20:22.9409568Z PASSED 2025-09-09T14:20:22.9410680Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_relu_fusion_cuda SKIPPED 2025-09-09T14:20:32.7697371Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_relu_fusion_no_conv_bias model pt2e: GraphModule( 2025-09-09T14:20:32.7698473Z (conv): Module() 2025-09-09T14:20:32.7698757Z (bn): Module() 2025-09-09T14:20:32.7699184Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:32.7700644Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:32.7702449Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:32.7703223Z ) 2025-09-09T14:20:32.7703600Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:32.7705118Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0014]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:20:32.7707141Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1720, -0.1912, -0.1684]), max_val=tensor([0.1914, 0.1792, 0.1824])) 2025-09-09T14:20:32.7708125Z ) 2025-09-09T14:20:32.7708517Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:32.7710486Z fake_quant_enabled=tensor([1]), 
observer_enabled=tensor([1]), scale=tensor([0.0078]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:32.7712140Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.9991776943206787) 2025-09-09T14:20:32.7712858Z ) 2025-09-09T14:20:32.7713094Z ) 2025-09-09T14:20:32.7713242Z 2025-09-09T14:20:32.7713247Z 2025-09-09T14:20:32.7713252Z 2025-09-09T14:20:32.7713504Z def forward(self, x): 2025-09-09T14:20:32.7713896Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:32.7714394Z conv_weight = self.conv.weight 2025-09-09T14:20:32.7714850Z bn_weight = self.bn.weight 2025-09-09T14:20:32.7715202Z bn_bias = self.bn.bias 2025-09-09T14:20:32.7715575Z bn_running_mean = self.bn.running_mean 2025-09-09T14:20:32.7716000Z bn_running_var = self.bn.running_var 2025-09-09T14:20:32.7716492Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:20:32.7717137Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:32.7718013Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:20:32.7718825Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:20:32.7719396Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:20:32.7719982Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:20:32.7720636Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:20:32.7721375Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:20:32.7722220Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:20:32.7723493Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:20:32.7724915Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:20:32.7725749Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:20:32.7727133Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:20:32.7728647Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:20:32.7729456Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:20:32.7730283Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:32.7730889Z 2025-09-09T14:20:32.7731291Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:32.7731852Z model fx: GraphModule( 2025-09-09T14:20:32.7732319Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:32.7733809Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:32.7735170Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:32.7735763Z ) 2025-09-09T14:20:32.7735975Z (conv): ConvBnReLU2d( 2025-09-09T14:20:32.7736266Z 3, 3, kernel_size=(3, 3), 
stride=(1, 1), bias=False 2025-09-09T14:20:32.7736768Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:20:32.7737294Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:32.7738524Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0014]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:20:32.7740071Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1720, -0.1912, -0.1684]), max_val=tensor([0.1914, 0.1792, 0.1824])) 2025-09-09T14:20:32.7740831Z ) 2025-09-09T14:20:32.7741032Z ) 2025-09-09T14:20:32.7741329Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:32.7742452Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0078]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:32.7743788Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.9991776943206787) 2025-09-09T14:20:32.7744327Z ) 2025-09-09T14:20:32.7744522Z ) 2025-09-09T14:20:32.7744626Z 2025-09-09T14:20:32.7744630Z 2025-09-09T14:20:32.7744639Z 2025-09-09T14:20:32.7744732Z def forward(self, x): 2025-09-09T14:20:32.7745141Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:32.7745765Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:20:32.7746391Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:20:32.7746895Z return activation_post_process_1 2025-09-09T14:20:32.7747185Z 2025-09-09T14:20:32.7747506Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:32.7747927Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:32.7748197Z [0., 0., 0.], 2025-09-09T14:20:32.7748426Z [0., 0., 0.]], 2025-09-09T14:20:32.7748594Z 2025-09-09T14:20:32.7748679Z [[0., 0., 0.], 2025-09-09T14:20:32.7748918Z [0., 0., 0.], 2025-09-09T14:20:32.7749143Z [0., 0., 0.]], 2025-09-09T14:20:32.7749296Z 2025-09-09T14:20:32.7749397Z [[0., 0., 0.], 2025-09-09T14:20:32.7749620Z [0., 0., 0.], 2025-09-09T14:20:32.7749888Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:20:32.7750231Z converted model pt2e: GraphModule( 2025-09-09T14:20:32.7750534Z (conv): Module() 2025-09-09T14:20:32.7750754Z (bn): Module() 2025-09-09T14:20:32.7750975Z ) 2025-09-09T14:20:32.7751081Z 2025-09-09T14:20:32.7751085Z 2025-09-09T14:20:32.7751089Z 2025-09-09T14:20:32.7751196Z def forward(self, x): 2025-09-09T14:20:32.7751508Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:32.7752358Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:32.7753995Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:32.7755085Z _scale_0 = self._scale_0 2025-09-09T14:20:32.7755382Z _zero_point_0 = self._zero_point_0 2025-09-09T14:20:32.7755713Z quantize_per_channel = self._frozen_param0 2025-09-09T14:20:32.7756738Z dequantize_per_channel = 
torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:20:32.7757752Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:20:32.7758725Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_weight_bias = None 2025-09-09T14:20:32.7759773Z relu = torch.ops.aten.relu.default(conv2d_2); conv2d_2 = None 2025-09-09T14:20:32.7760651Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.007839912548661232, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:20:32.7762225Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.007839912548661232, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:32.7763393Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:20:32.7763869Z 2025-09-09T14:20:32.7764183Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:32.7764598Z onverted model fx: GraphModule( 2025-09-09T14:20:32.7764948Z (conv): ConvReLU2d( 2025-09-09T14:20:32.7765309Z (0): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:20:32.7765733Z (1): ReLU() 2025-09-09T14:20:32.7765931Z ) 2025-09-09T14:20:32.7766126Z ) 2025-09-09T14:20:32.7766224Z 2025-09-09T14:20:32.7766228Z 2025-09-09T14:20:32.7766233Z 2025-09-09T14:20:32.7766324Z def forward(self, x): 2025-09-09T14:20:32.7767029Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:32.7768469Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:32.7769632Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:20:32.7770620Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.007839912548661232, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:20:32.7772125Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.007839912548661232, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:32.7773141Z return dequantize_per_tensor_default_1 2025-09-09T14:20:32.7773451Z 2025-09-09T14:20:42.5809115Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:42.5809694Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:42.5810086Z [0., 0., 0.], 2025-09-09T14:20:42.5810423Z [0., 0., 0.]], 2025-09-09T14:20:42.5810583Z 2025-09-09T14:20:42.5810668Z [[0., 0., 0.], 2025-09-09T14:20:42.5810910Z [0., 0., 0.], 2025-09-09T14:20:42.5811134Z [0., 0., 0.]], 2025-09-09T14:20:42.5811473Z 2025-09-09T14:20:42.5811561Z [[0., 0., 0.], 2025-09-09T14:20:42.5811798Z [0., 0., 0.], 2025-09-09T14:20:42.5812148Z [0., 0., 0.]]]]) 2025-09-09T14:20:42.5812602Z model pt2e: GraphModule( 2025-09-09T14:20:42.5813080Z (conv): Module() 2025-09-09T14:20:42.5813483Z (bn): Module() 2025-09-09T14:20:42.5814089Z (activation_post_process_0): 
FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:42.5815530Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:42.5816847Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:42.5817461Z ) 2025-09-09T14:20:42.5817775Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:42.5818884Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:20:42.5820245Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19124282896518707, max_val=0.19141820073127747) 2025-09-09T14:20:42.5820902Z ) 2025-09-09T14:20:42.5821216Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:42.5822635Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0078]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:42.5823886Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.999093770980835) 2025-09-09T14:20:42.5824439Z ) 2025-09-09T14:20:42.5824629Z ) 2025-09-09T14:20:42.5824752Z 2025-09-09T14:20:42.5824756Z 2025-09-09T14:20:42.5824760Z 2025-09-09T14:20:42.5824853Z def forward(self, x): 2025-09-09T14:20:42.5825281Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:42.5825676Z conv_weight = self.conv.weight 2025-09-09T14:20:42.5825996Z bn_weight = self.bn.weight 2025-09-09T14:20:42.5826275Z bn_bias = self.bn.bias 2025-09-09T14:20:42.5826570Z bn_running_mean = self.bn.running_mean 2025-09-09T14:20:42.5826902Z bn_running_var = self.bn.running_var 2025-09-09T14:20:42.5827290Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:20:42.5827789Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:42.5828480Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:20:42.5829093Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:20:42.5829527Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:20:42.5829996Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:20:42.5830491Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:20:42.5831078Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:20:42.5831721Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:20:42.5832704Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, None); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:20:42.5833686Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:20:42.5834303Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:20:42.5835439Z batch_norm_1 = torch.ops.aten.batch_norm.default(div_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); div_1 = bn_weight = bn_bias = bn_running_mean = 
bn_running_var = None 2025-09-09T14:20:42.5836427Z relu = torch.ops.aten.relu.default(batch_norm_1); batch_norm_1 = None 2025-09-09T14:20:42.5837045Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:20:42.5837681Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:42.5838123Z 2025-09-09T14:20:42.5838446Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:42.5838858Z model fx: GraphModule( 2025-09-09T14:20:42.5839226Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:42.5840330Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:42.5841645Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:42.5842244Z ) 2025-09-09T14:20:42.5842443Z (conv): ConvBnReLU2d( 2025-09-09T14:20:42.5842753Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:20:42.5843247Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:20:42.5843791Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:42.5844970Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:20:42.5846295Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19124282896518707, max_val=0.19141820073127747) 2025-09-09T14:20:42.5846915Z ) 2025-09-09T14:20:42.5847099Z ) 2025-09-09T14:20:42.5847414Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:42.5848564Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0078]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:42.5849891Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.999093770980835) 2025-09-09T14:20:42.5850449Z ) 2025-09-09T14:20:42.5850637Z ) 2025-09-09T14:20:42.5850756Z 2025-09-09T14:20:42.5850760Z 2025-09-09T14:20:42.5850764Z 2025-09-09T14:20:42.5850856Z def forward(self, x): 2025-09-09T14:20:42.5851268Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:42.5851880Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:20:42.5852524Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:20:42.5853014Z return activation_post_process_1 2025-09-09T14:20:42.5853314Z 2025-09-09T14:20:42.5853616Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:42.5854041Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:42.5854306Z [0., 0., 0.], 2025-09-09T14:20:42.5854547Z [0., 0., 0.]], 2025-09-09T14:20:42.5854700Z 2025-09-09T14:20:42.5854797Z [[0., 0., 0.], 2025-09-09T14:20:42.5855021Z [0., 0., 0.], 2025-09-09T14:20:42.5855255Z [0., 0., 0.]], 2025-09-09T14:20:42.5855406Z 2025-09-09T14:20:42.5855491Z [[0., 0., 0.], 2025-09-09T14:20:42.5855723Z [0., 0., 0.], 2025-09-09T14:20:42.5855985Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:20:42.5856339Z converted model pt2e: GraphModule( 
2025-09-09T14:20:42.5856628Z (conv): Module() 2025-09-09T14:20:42.5856857Z (bn): Module() 2025-09-09T14:20:42.5857076Z ) 2025-09-09T14:20:42.5857181Z 2025-09-09T14:20:42.5857185Z 2025-09-09T14:20:42.5857189Z 2025-09-09T14:20:42.5857282Z def forward(self, x): 2025-09-09T14:20:42.5857781Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:42.5858593Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:42.5860044Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:42.5861067Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:20:42.5861968Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.001507229870185256, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:20:42.5862889Z conv_weight_bias = self.conv.weight_bias 2025-09-09T14:20:42.5863835Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_weight_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_weight_bias = None 2025-09-09T14:20:42.5864870Z relu = torch.ops.aten.relu.default(conv2d_2); conv2d_2 = None 2025-09-09T14:20:42.5865768Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.007839583791792393, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:20:42.5867248Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.007839583791792393, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:20:42.5868509Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:20:42.5868985Z 2025-09-09T14:20:42.5869284Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:42.5869709Z onverted model fx: GraphModule( 2025-09-09T14:20:42.5869987Z (conv): ConvReLU2d( 2025-09-09T14:20:42.5870364Z (0): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:20:42.5870807Z (1): ReLU() 2025-09-09T14:20:42.5871012Z ) 2025-09-09T14:20:42.5871206Z ) 2025-09-09T14:20:42.5871378Z 2025-09-09T14:20:42.5871383Z 2025-09-09T14:20:42.5871386Z 2025-09-09T14:20:42.5871475Z def forward(self, x): 2025-09-09T14:20:42.5872187Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:42.5873639Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:42.5874871Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:20:42.5875869Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.007839583791792393, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:20:42.5877380Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.007839583791792393, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:42.5878408Z return 
dequantize_per_tensor_default_1 2025-09-09T14:20:42.5878721Z 2025-09-09T14:20:44.3025096Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:44.3025563Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:44.3025844Z [0., 0., 0.], 2025-09-09T14:20:44.3026081Z [0., 0., 0.]], 2025-09-09T14:20:44.3026253Z 2025-09-09T14:20:44.3026371Z [[0., 0., 0.], 2025-09-09T14:20:44.3026597Z [0., 0., 0.], 2025-09-09T14:20:44.3026835Z [0., 0., 0.]], 2025-09-09T14:20:44.3026988Z 2025-09-09T14:20:44.3027088Z [[0., 0., 0.], 2025-09-09T14:20:44.3027310Z [0., 0., 0.], 2025-09-09T14:20:44.3027550Z [0., 0., 0.]]]]) 2025-09-09T14:20:44.3027997Z PASSED 2025-09-09T14:20:44.3028676Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_no_bias model pt2e: GraphModule( 2025-09-09T14:20:44.3029397Z (conv): Module() 2025-09-09T14:20:44.3029743Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:44.3030916Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:20:44.3032480Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1782, -0.1825, -0.1912]), max_val=tensor([0.1676, 0.1914, 0.1824])) 2025-09-09T14:20:44.3033256Z ) 2025-09-09T14:20:44.3033561Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:44.3034843Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:44.3036768Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:44.3037571Z ) 2025-09-09T14:20:44.3037938Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:44.3039465Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0052]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:44.3040982Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.3200514316558838) 2025-09-09T14:20:44.3041631Z ) 2025-09-09T14:20:44.3041820Z ) 2025-09-09T14:20:44.3041971Z 2025-09-09T14:20:44.3041979Z 2025-09-09T14:20:44.3047379Z 2025-09-09T14:20:44.3047499Z def forward(self, x): 2025-09-09T14:20:44.3047830Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:44.3048363Z conv_weight = self.conv.weight 2025-09-09T14:20:44.3049137Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:20:44.3049895Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:44.3050931Z conv2d = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:20:44.3051974Z relu = torch.ops.aten.relu.default(conv2d); conv2d = None 2025-09-09T14:20:44.3052542Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:20:44.3053254Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:44.3053771Z 
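The FusedMovingAvgObsFakeQuantize modules in the prepared graphs above simulate the same quantize/dequantize round trip that the converted graphs spell out as explicit torch.ops.quantized_decomposed.quantize_per_tensor / dequantize_per_tensor pairs. A minimal sketch of that affine round trip, reusing the input scale and zero point printed in this log (the helper below is illustrative, not the library implementation):

    import torch

    def fake_quantize_per_tensor(x, scale, zero_point, quant_min, quant_max):
        # quantize: divide by the scale, shift by the zero point, clamp to the int8 range
        q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
        # dequantize: map the integers back to floats; x - result is the quantization error
        return (q - zero_point) * scale

    x = torch.randn(1, 3, 5, 5)
    x_dq = fake_quantize_per_tensor(x, scale=0.018311796709895134, zero_point=10,
                                    quant_min=-128, quant_max=127)

The all-zero diff tensors printed after each pt2e/fx pair indicate that the export-based and FX graph-mode flows agree numerically on the example input at that stage.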
2025-09-09T14:20:44.3054114Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:44.3054583Z model fx: GraphModule( 2025-09-09T14:20:44.3055017Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:44.3056223Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:44.3057679Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:44.3058351Z ) 2025-09-09T14:20:44.3058550Z (conv): ConvReLU2d( 2025-09-09T14:20:44.3058917Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:20:44.3059333Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:44.3060620Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0014, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:20:44.3062331Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1782, -0.1825, -0.1912]), max_val=tensor([0.1676, 0.1914, 0.1824])) 2025-09-09T14:20:44.3063179Z ) 2025-09-09T14:20:44.3063442Z ) 2025-09-09T14:20:44.3063752Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:44.3065127Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0052]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:44.3066507Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.3200514316558838) 2025-09-09T14:20:44.3067043Z ) 2025-09-09T14:20:44.3067290Z ) 2025-09-09T14:20:44.3067396Z 2025-09-09T14:20:44.3067400Z 2025-09-09T14:20:44.3067404Z 2025-09-09T14:20:44.3067498Z def forward(self, x): 2025-09-09T14:20:44.3067913Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:44.3068581Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:20:44.3069271Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:20:44.3069828Z return activation_post_process_1 2025-09-09T14:20:44.3070114Z 2025-09-09T14:20:44.3070487Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:44.3070906Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:44.3071235Z [0., 0., 0.], 2025-09-09T14:20:44.3071579Z [0., 0., 0.]], 2025-09-09T14:20:44.3071768Z 2025-09-09T14:20:44.3071889Z [[0., 0., 0.], 2025-09-09T14:20:44.3072132Z [0., 0., 0.], 2025-09-09T14:20:44.3072360Z [0., 0., 0.]], 2025-09-09T14:20:44.3072539Z 2025-09-09T14:20:44.3072670Z [[0., 0., 0.], 2025-09-09T14:20:44.3072897Z [0., 0., 0.], 2025-09-09T14:20:44.3073170Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:20:44.3073574Z converted model pt2e: GraphModule( 2025-09-09T14:20:44.3073961Z (conv): Module() 2025-09-09T14:20:44.3074232Z ) 2025-09-09T14:20:44.3074348Z 2025-09-09T14:20:44.3074353Z 2025-09-09T14:20:44.3074356Z 2025-09-09T14:20:44.3074449Z def forward(self, x): 2025-09-09T14:20:44.3074866Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:44.3075267Z _scale_0 = self._scale_0 2025-09-09T14:20:44.3075563Z _zero_point_0 = self._zero_point_0 
2025-09-09T14:20:44.3075991Z quantize_per_channel_default = self._frozen_param0 2025-09-09T14:20:44.3077239Z dequantize_per_channel_default = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_default, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel_default = _scale_0 = _zero_point_0 = None 2025-09-09T14:20:44.3078902Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:44.3080439Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:44.3082044Z conv2d = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel_default); dequantize_per_tensor_default = dequantize_per_channel_default = None 2025-09-09T14:20:44.3083021Z relu = torch.ops.aten.relu.default(conv2d); conv2d = None 2025-09-09T14:20:44.3083977Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.005176672246307135, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:20:44.3085528Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.005176672246307135, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:44.3086728Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:20:44.3087219Z 2025-09-09T14:20:44.3087540Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:44.3087966Z onverted model fx: GraphModule( 2025-09-09T14:20:44.3088265Z (conv): ConvReLU2d( 2025-09-09T14:20:44.3088678Z (0): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False) 2025-09-09T14:20:44.3089159Z (1): ReLU() 2025-09-09T14:20:44.3089367Z ) 2025-09-09T14:20:44.3089565Z ) 2025-09-09T14:20:44.3089675Z 2025-09-09T14:20:44.3089680Z 2025-09-09T14:20:44.3089684Z 2025-09-09T14:20:44.3089776Z def forward(self, x): 2025-09-09T14:20:44.3090512Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:44.3092061Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:44.3093267Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:20:44.3094291Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.005176672246307135, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:20:44.3095922Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.005176672246307135, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:44.3096969Z return dequantize_per_tensor_default_1 2025-09-09T14:20:44.3097293Z 2025-09-09T14:20:44.3097602Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:44.3098027Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:44.3098285Z [0., 0., 0.], 2025-09-09T14:20:44.3098529Z [0., 0., 0.]], 2025-09-09T14:20:44.3098683Z 2025-09-09T14:20:44.3098776Z 
[[0., 0., 0.], 2025-09-09T14:20:44.3099071Z [0., 0., 0.], 2025-09-09T14:20:44.3099307Z [0., 0., 0.]], 2025-09-09T14:20:44.3099462Z 2025-09-09T14:20:44.3099543Z [[0., 0., 0.], 2025-09-09T14:20:44.3099952Z [0., 0., 0.], 2025-09-09T14:20:44.3100173Z [0., 0., 0.]]]]) 2025-09-09T14:20:44.3100432Z model pt2e: GraphModule( 2025-09-09T14:20:44.3100677Z (conv): Module() 2025-09-09T14:20:44.3101022Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:44.3102112Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:20:44.3103405Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19124378263950348, max_val=0.19141915440559387) 2025-09-09T14:20:44.3104006Z ) 2025-09-09T14:20:44.3104356Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8334845Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:46.8336177Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:46.8336764Z ) 2025-09-09T14:20:46.8337084Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8338173Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0052]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:46.8339388Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.3200514316558838) 2025-09-09T14:20:46.8339927Z ) 2025-09-09T14:20:46.8340109Z ) 2025-09-09T14:20:46.8340243Z 2025-09-09T14:20:46.8340248Z 2025-09-09T14:20:46.8340251Z 2025-09-09T14:20:46.8340343Z def forward(self, x): 2025-09-09T14:20:46.8340659Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:46.8341028Z conv_weight = self.conv.weight 2025-09-09T14:20:46.8341542Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:20:46.8342199Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:46.8343160Z conv2d = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:20:46.8344205Z relu = torch.ops.aten.relu.default(conv2d); conv2d = None 2025-09-09T14:20:46.8344754Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:20:46.8345391Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:46.8345856Z 2025-09-09T14:20:46.8346171Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:46.8346619Z model fx: GraphModule( 2025-09-09T14:20:46.8347074Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8348747Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:46.8350244Z (activation_post_process): 
MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:46.8350929Z ) 2025-09-09T14:20:46.8351139Z (conv): ConvReLU2d( 2025-09-09T14:20:46.8351421Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:20:46.8351909Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8353081Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:20:46.8354802Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19124378263950348, max_val=0.19141915440559387) 2025-09-09T14:20:46.8355431Z ) 2025-09-09T14:20:46.8355684Z ) 2025-09-09T14:20:46.8355999Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8357358Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0052]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:46.8358803Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.3200514316558838) 2025-09-09T14:20:46.8359394Z ) 2025-09-09T14:20:46.8359625Z ) 2025-09-09T14:20:46.8359734Z 2025-09-09T14:20:46.8359738Z 2025-09-09T14:20:46.8359747Z 2025-09-09T14:20:46.8359854Z def forward(self, x): 2025-09-09T14:20:46.8360323Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:46.8360999Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:20:46.8361670Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:20:46.8362259Z return activation_post_process_1 2025-09-09T14:20:46.8362625Z 2025-09-09T14:20:46.8362949Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:46.8363453Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:46.8363724Z [0., 0., 0.], 2025-09-09T14:20:46.8363984Z [0., 0., 0.]], 2025-09-09T14:20:46.8364206Z 2025-09-09T14:20:46.8364298Z [[0., 0., 0.], 2025-09-09T14:20:46.8364545Z [0., 0., 0.], 2025-09-09T14:20:46.8364777Z [0., 0., 0.]], 2025-09-09T14:20:46.8365014Z 2025-09-09T14:20:46.8365101Z [[0., 0., 0.], 2025-09-09T14:20:46.8365335Z [0., 0., 0.], 2025-09-09T14:20:46.8365613Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:20:46.8366054Z converted model pt2e: GraphModule( 2025-09-09T14:20:46.8366352Z (conv): Module() 2025-09-09T14:20:46.8366650Z ) 2025-09-09T14:20:46.8366758Z 2025-09-09T14:20:46.8366763Z 2025-09-09T14:20:46.8366768Z 2025-09-09T14:20:46.8366862Z def forward(self, x): 2025-09-09T14:20:46.8367201Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:46.8367721Z quantize_per_tensor_default = self._frozen_param0 2025-09-09T14:20:46.8368951Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0015072374371811748, 0, -127, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:46.8370633Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:46.8372313Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default_1 = 
None 2025-09-09T14:20:46.8374113Z conv2d = torch.ops.aten.conv2d.default(dequantize_per_tensor_default_1, dequantize_per_tensor_default); dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:20:46.8375514Z relu = torch.ops.aten.relu.default(conv2d); conv2d = None 2025-09-09T14:20:46.8376490Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.005176672246307135, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:20:46.8378188Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.005176672246307135, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:20:46.8379545Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:20:46.8380180Z 2025-09-09T14:20:46.8380512Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:46.8381013Z onverted model fx: GraphModule( 2025-09-09T14:20:46.8381327Z (conv): ConvReLU2d( 2025-09-09T14:20:46.8381804Z (0): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False) 2025-09-09T14:20:46.8382357Z (1): ReLU() 2025-09-09T14:20:46.8382571Z ) 2025-09-09T14:20:46.8382767Z ) 2025-09-09T14:20:46.8382872Z 2025-09-09T14:20:46.8382878Z 2025-09-09T14:20:46.8382884Z 2025-09-09T14:20:46.8383050Z def forward(self, x): 2025-09-09T14:20:46.8383823Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:46.8385596Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:46.8386830Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:20:46.8387869Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.005176672246307135, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:20:46.8389390Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.005176672246307135, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:46.8390479Z return dequantize_per_tensor_default_1 2025-09-09T14:20:46.8390786Z 2025-09-09T14:20:46.8391083Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:46.8391499Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:46.8391764Z [0., 0., 0.], 2025-09-09T14:20:46.8391985Z [0., 0., 0.]], 2025-09-09T14:20:46.8392140Z 2025-09-09T14:20:46.8392234Z [[0., 0., 0.], 2025-09-09T14:20:46.8392452Z [0., 0., 0.], 2025-09-09T14:20:46.8392683Z [0., 0., 0.]], 2025-09-09T14:20:46.8392832Z 2025-09-09T14:20:46.8392913Z [[0., 0., 0.], 2025-09-09T14:20:46.8393143Z [0., 0., 0.], 2025-09-09T14:20:46.8393362Z [0., 0., 0.]]]]) 2025-09-09T14:20:46.8393624Z model pt2e: GraphModule( 2025-09-09T14:20:46.8393945Z (conv): Module() 2025-09-09T14:20:46.8394272Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8395551Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0014, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 
2025-09-09T14:20:46.8397221Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1897, -0.1787, -0.1913]), max_val=tensor([0.1870, 0.1478, 0.1740])) 2025-09-09T14:20:46.8397996Z ) 2025-09-09T14:20:46.8398302Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8399409Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:46.8400824Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:46.8401453Z ) 2025-09-09T14:20:46.8401770Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:46.8402882Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0084]), zero_point=tensor([-20], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:46.8404183Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.9077497124671936, max_val=1.2348304986953735) 2025-09-09T14:20:46.8404872Z ) 2025-09-09T14:20:46.8405054Z ) 2025-09-09T14:20:46.8405172Z 2025-09-09T14:20:46.8405177Z 2025-09-09T14:20:46.8405180Z 2025-09-09T14:20:46.8405275Z def forward(self, x): 2025-09-09T14:20:46.8405678Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:46.8406075Z conv_weight = self.conv.weight 2025-09-09T14:20:46.8406616Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:20:46.8407290Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:47.8382018Z conv2d = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:20:47.8383322Z activation_post_process_2 = self.activation_post_process_2(conv2d); conv2d = None 2025-09-09T14:20:47.8384163Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:47.8384768Z 2025-09-09T14:20:47.8385169Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:47.8385693Z model fx: GraphModule( 2025-09-09T14:20:47.8386150Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:47.8387606Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:47.8389315Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:47.8390075Z ) 2025-09-09T14:20:47.8390261Z (conv): Conv2d( 2025-09-09T14:20:47.8390530Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:20:47.8390923Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:47.8392035Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0014, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:20:47.8393551Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1897, -0.1787, -0.1913]), max_val=tensor([0.1870, 0.1478, 
0.1740])) 2025-09-09T14:20:47.8394289Z ) 2025-09-09T14:20:47.8394493Z ) 2025-09-09T14:20:47.8394852Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:47.8395941Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0084]), zero_point=tensor([-20], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:47.8397226Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.9077497124671936, max_val=1.2348304986953735) 2025-09-09T14:20:47.8397813Z ) 2025-09-09T14:20:47.8398002Z ) 2025-09-09T14:20:47.8398106Z 2025-09-09T14:20:47.8398110Z 2025-09-09T14:20:47.8398114Z 2025-09-09T14:20:47.8398205Z def forward(self, x): 2025-09-09T14:20:47.8398634Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:47.8399234Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:20:47.8400122Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:20:47.8400614Z return activation_post_process_1 2025-09-09T14:20:47.8400909Z 2025-09-09T14:20:47.8401208Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:47.8401629Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:47.8401885Z [0., 0., 0.], 2025-09-09T14:20:47.8402123Z [0., 0., 0.]], 2025-09-09T14:20:47.8402272Z 2025-09-09T14:20:47.8402353Z [[0., 0., 0.], 2025-09-09T14:20:47.8402583Z [0., 0., 0.], 2025-09-09T14:20:47.8402923Z [0., 0., 0.]], 2025-09-09T14:20:47.8403086Z 2025-09-09T14:20:47.8403168Z [[0., 0., 0.], 2025-09-09T14:20:47.8403392Z [0., 0., 0.], 2025-09-09T14:20:47.8412558Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:20:47.8412916Z converted model pt2e: GraphModule( 2025-09-09T14:20:47.8413206Z (conv): Module() 2025-09-09T14:20:47.8413423Z ) 2025-09-09T14:20:47.8413544Z 2025-09-09T14:20:47.8413548Z 2025-09-09T14:20:47.8413566Z 2025-09-09T14:20:47.8413657Z def forward(self, x): 2025-09-09T14:20:47.8413975Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:47.8414339Z _scale_0 = self._scale_0 2025-09-09T14:20:47.8414629Z _zero_point_0 = self._zero_point_0 2025-09-09T14:20:47.8414979Z quantize_per_channel_default = self._frozen_param0 2025-09-09T14:20:47.8416138Z dequantize_per_channel_default = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_default, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel_default = _scale_0 = _zero_point_0 = None 2025-09-09T14:20:47.8417712Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:47.8419146Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:47.8420700Z conv2d = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel_default); dequantize_per_tensor_default = dequantize_per_channel_default = None 2025-09-09T14:20:47.8422070Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d, 0.00840227585285902, -20, -128, 127, torch.int8); conv2d = None 2025-09-09T14:20:47.8423555Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 
0.00840227585285902, -20, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:47.8424736Z return pytree.tree_unflatten((dequantize_per_tensor_default_1,), self._out_spec) 2025-09-09T14:20:47.8425191Z 2025-09-09T14:20:47.8425500Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:47.8425926Z onverted model fx: GraphModule( 2025-09-09T14:20:47.8426386Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False) 2025-09-09T14:20:47.8426866Z ) 2025-09-09T14:20:47.8426969Z 2025-09-09T14:20:47.8426973Z 2025-09-09T14:20:47.8426977Z 2025-09-09T14:20:47.8427069Z def forward(self, x): 2025-09-09T14:20:47.8427784Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:20:47.8429218Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:20:47.8430381Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:20:47.8431358Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.00840227585285902, -20, -128, 127, torch.int8); conv = None 2025-09-09T14:20:47.8433065Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.00840227585285902, -20, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:20:47.8434088Z return dequantize_per_tensor_default_1 2025-09-09T14:20:47.8434397Z 2025-09-09T14:20:47.8434781Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:47.8435204Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:20:47.8435459Z [0., 0., 0.], 2025-09-09T14:20:47.8435811Z [0., 0., 0.]], 2025-09-09T14:20:47.8435965Z 2025-09-09T14:20:47.8436048Z [[0., 0., 0.], 2025-09-09T14:20:47.8436282Z [0., 0., 0.], 2025-09-09T14:20:47.8436504Z [0., 0., 0.]], 2025-09-09T14:20:47.8436668Z 2025-09-09T14:20:47.8436749Z [[0., 0., 0.], 2025-09-09T14:20:47.8436981Z [0., 0., 0.], 2025-09-09T14:20:47.8437202Z [0., 0., 0.]]]]) 2025-09-09T14:20:47.8437470Z model pt2e: GraphModule( 2025-09-09T14:20:47.8437711Z (conv): Module() 2025-09-09T14:20:47.8438046Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:47.8439123Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:20:47.8440426Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19127574563026428, max_val=0.18703685700893402) 2025-09-09T14:20:47.8441032Z ) 2025-09-09T14:20:47.8441326Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:47.8442400Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:47.8443652Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:47.8444240Z ) 2025-09-09T14:20:47.8444544Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 
2025-09-09T14:20:47.8445611Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0084]), zero_point=tensor([-20], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:47.8446876Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.9042347073554993, max_val=1.2348304986953735) 2025-09-09T14:20:47.8447460Z ) 2025-09-09T14:20:47.8447652Z ) 2025-09-09T14:20:47.8447752Z 2025-09-09T14:20:47.8447756Z 2025-09-09T14:20:47.8447760Z 2025-09-09T14:20:47.8447868Z def forward(self, x): 2025-09-09T14:20:47.8448170Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:20:47.8448553Z conv_weight = self.conv.weight 2025-09-09T14:20:47.8449060Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:20:47.8449730Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:20:47.8450645Z conv2d = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1); activation_post_process_0 = activation_post_process_1 = None 2025-09-09T14:20:47.8451604Z activation_post_process_2 = self.activation_post_process_2(conv2d); conv2d = None 2025-09-09T14:20:47.8452238Z return pytree.tree_unflatten((activation_post_process_2,), self._out_spec) 2025-09-09T14:20:47.8452671Z 2025-09-09T14:20:47.8452983Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:20:47.8453382Z model fx: GraphModule( 2025-09-09T14:20:47.8453737Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:20:47.8455473Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:20:47.8456762Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:20:47.8457345Z ) 2025-09-09T14:20:47.8457531Z (conv): Conv2d( 2025-09-09T14:20:47.8457809Z 3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False 2025-09-09T14:20:47.8458203Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4532918Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:21:31.4534629Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19127574563026428, max_val=0.18703685700893402) 2025-09-09T14:21:31.4535243Z ) 2025-09-09T14:21:31.4535429Z ) 2025-09-09T14:21:31.4535753Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4536845Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0084]), zero_point=tensor([-20], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:31.4538112Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.9042347073554993, max_val=1.2348304986953735) 2025-09-09T14:21:31.4538712Z ) 2025-09-09T14:21:31.4538892Z ) 2025-09-09T14:21:31.4539017Z 2025-09-09T14:21:31.4539021Z 2025-09-09T14:21:31.4539025Z 2025-09-09T14:21:31.4539118Z def forward(self, x): 2025-09-09T14:21:31.4539517Z activation_post_process_0 = self.activation_post_process_0(x); x = None 
2025-09-09T14:21:31.4540109Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:21:31.4540733Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:21:31.4541210Z return activation_post_process_1 2025-09-09T14:21:31.4541505Z 2025-09-09T14:21:31.4541803Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:31.4542222Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:21:31.4542489Z [0., 0., 0.], 2025-09-09T14:21:31.4542714Z [0., 0., 0.]], 2025-09-09T14:21:31.4542864Z 2025-09-09T14:21:31.4542962Z [[0., 0., 0.], 2025-09-09T14:21:31.4543182Z [0., 0., 0.], 2025-09-09T14:21:31.4543418Z [0., 0., 0.]], 2025-09-09T14:21:31.4543572Z 2025-09-09T14:21:31.4543653Z [[0., 0., 0.], 2025-09-09T14:21:31.4543887Z [0., 0., 0.], 2025-09-09T14:21:31.4544143Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:21:31.4544491Z converted model pt2e: GraphModule( 2025-09-09T14:21:31.4544785Z (conv): Module() 2025-09-09T14:21:31.4545003Z ) 2025-09-09T14:21:31.4545103Z 2025-09-09T14:21:31.4545108Z 2025-09-09T14:21:31.4545111Z 2025-09-09T14:21:31.4545200Z def forward(self, x): 2025-09-09T14:21:31.4545516Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:21:31.4545922Z quantize_per_tensor_default = self._frozen_param0 2025-09-09T14:21:31.4546970Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.0015061082085594535, 0, -127, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:21:31.4548411Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:21:31.4549861Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:21:31.4551564Z conv2d = torch.ops.aten.conv2d.default(dequantize_per_tensor_default_1, dequantize_per_tensor_default); dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:21:31.4552937Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d, 0.008388491347432137, -20, -128, 127, torch.int8); conv2d = None 2025-09-09T14:21:31.4554428Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.008388491347432137, -20, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:21:31.4555675Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:21:31.4556206Z 2025-09-09T14:21:31.4556518Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:31.4556940Z onverted model fx: GraphModule( 2025-09-09T14:21:31.4557393Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1), bias=False) 2025-09-09T14:21:31.4557863Z ) 2025-09-09T14:21:31.4557966Z 2025-09-09T14:21:31.4557970Z 2025-09-09T14:21:31.4557974Z 2025-09-09T14:21:31.4558070Z def forward(self, x): 2025-09-09T14:21:31.4558772Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:21:31.4560217Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 
0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:21:31.4561376Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:21:31.4562359Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.008388491347432137, -20, -128, 127, torch.int8); conv = None 2025-09-09T14:21:31.4563832Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.008388491347432137, -20, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:21:31.4564889Z return dequantize_per_tensor_default_1 2025-09-09T14:21:31.4565202Z 2025-09-09T14:21:31.4565506Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:31.4565923Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:21:31.4566176Z [0., 0., 0.], 2025-09-09T14:21:31.4566416Z [0., 0., 0.]], 2025-09-09T14:21:31.4566568Z 2025-09-09T14:21:31.4566662Z [[0., 0., 0.], 2025-09-09T14:21:31.4566878Z [0., 0., 0.], 2025-09-09T14:21:31.4567112Z [0., 0., 0.]], 2025-09-09T14:21:31.4567259Z 2025-09-09T14:21:31.4567338Z [[0., 0., 0.], 2025-09-09T14:21:31.4567566Z [0., 0., 0.], 2025-09-09T14:21:31.4567786Z [0., 0., 0.]]]]) 2025-09-09T14:21:31.4568230Z PASSED 2025-09-09T14:21:31.4568964Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_transpose_bn PASSED 2025-09-09T14:21:31.4570166Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_transpose_bn_relu PASSED 2025-09-09T14:21:31.4571272Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_inplace_add_relu model pt2e: GraphModule( 2025-09-09T14:21:31.4571961Z (conv): Module() 2025-09-09T14:21:31.4572296Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4573394Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0002]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:21:31.4574769Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.0532]), max_val=tensor([-0.0532])) 2025-09-09T14:21:31.4575419Z ) 2025-09-09T14:21:31.4575713Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4576884Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0111]), zero_point=tensor([38], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:31.4578149Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.8401848077774048, max_val=0.9828221797943115) 2025-09-09T14:21:31.4578738Z ) 2025-09-09T14:21:31.4579043Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4580105Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:31.4581450Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.3858921527862549, max_val=0.5359839200973511) 2025-09-09T14:21:31.4582018Z ) 2025-09-09T14:21:31.4582318Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4583401Z fake_quant_enabled=tensor([1]), 
observer_enabled=tensor([1]), scale=tensor([0.0054]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:31.4584612Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.372033953666687) 2025-09-09T14:21:31.4585143Z ) 2025-09-09T14:21:31.4585318Z ) 2025-09-09T14:21:31.4585432Z 2025-09-09T14:21:31.4585436Z 2025-09-09T14:21:31.4585439Z 2025-09-09T14:21:31.4585534Z def forward(self, x): 2025-09-09T14:21:31.4585854Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:21:31.4586223Z conv_weight = self.conv.weight 2025-09-09T14:21:31.4586743Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:21:31.4587266Z conv_bias = self.conv.bias 2025-09-09T14:21:31.4587685Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:21:31.4588571Z conv2d = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, conv_bias); activation_post_process_1 = conv_bias = None 2025-09-09T14:21:31.4589498Z activation_post_process_2 = self.activation_post_process_2(conv2d); conv2d = None 2025-09-09T14:21:31.4590417Z add_ = torch.ops.aten.add_.Tensor(activation_post_process_2, activation_post_process_0); activation_post_process_2 = activation_post_process_0 = None 2025-09-09T14:21:31.4591218Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:21:31.4591760Z activation_post_process_3 = self.activation_post_process_3(relu_); relu_ = None 2025-09-09T14:21:31.4592369Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:21:31.4592808Z 2025-09-09T14:21:31.4593105Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:31.4593514Z model fx: GraphModule( 2025-09-09T14:21:31.4593871Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4595028Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0111]), zero_point=tensor([38], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:31.4596324Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.8401848077774048, max_val=0.9828221797943115) 2025-09-09T14:21:31.4596907Z ) 2025-09-09T14:21:31.4597110Z (conv): Conv2d( 2025-09-09T14:21:31.4597376Z 1, 1, kernel_size=(1, 1), stride=(1, 1) 2025-09-09T14:21:31.4597747Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:31.4598821Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0002]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:21:32.4619090Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.0532]), max_val=tensor([-0.0532])) 2025-09-09T14:21:32.4619809Z ) 2025-09-09T14:21:32.4619998Z ) 2025-09-09T14:21:32.4620311Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:32.4621437Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:32.4622943Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.3858921527862549, 
max_val=0.5359839200973511) 2025-09-09T14:21:32.4623535Z ) 2025-09-09T14:21:32.4623735Z (relu): ReLU(inplace=True) 2025-09-09T14:21:32.4624121Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:32.4625223Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0054]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:32.4626442Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.372033953666687) 2025-09-09T14:21:32.4626971Z ) 2025-09-09T14:21:32.4627152Z ) 2025-09-09T14:21:32.4627267Z 2025-09-09T14:21:32.4627272Z 2025-09-09T14:21:32.4627276Z 2025-09-09T14:21:32.4627367Z def forward(self, x): 2025-09-09T14:21:32.4627762Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:21:32.4628240Z conv = self.conv(activation_post_process_0) 2025-09-09T14:21:32.4628732Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:21:32.4629509Z add = activation_post_process_1 + activation_post_process_0; activation_post_process_1 = activation_post_process_0 = None 2025-09-09T14:21:32.4630161Z relu = self.relu(add); add = None 2025-09-09T14:21:32.4630617Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:21:32.4631101Z return activation_post_process_2 2025-09-09T14:21:32.4631388Z 2025-09-09T14:21:32.4631682Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:32.4632093Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:21:32.4632346Z [0., 0., 0.], 2025-09-09T14:21:32.4632610Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:21:32.4632939Z converted model pt2e: GraphModule( 2025-09-09T14:21:32.4633228Z (conv): Module() 2025-09-09T14:21:32.4633436Z ) 2025-09-09T14:21:32.4633550Z 2025-09-09T14:21:32.4633554Z 2025-09-09T14:21:32.4633558Z 2025-09-09T14:21:32.4633648Z def forward(self, x): 2025-09-09T14:21:32.4633958Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:21:32.4634312Z _scale_0 = self._scale_0 2025-09-09T14:21:32.4634596Z _zero_point_0 = self._zero_point_0 2025-09-09T14:21:32.4635028Z quantize_per_channel_default = self._frozen_param0 2025-09-09T14:21:32.4636188Z dequantize_per_channel_default = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel_default, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel_default = _scale_0 = _zero_point_0 = None 2025-09-09T14:21:32.4637288Z conv_bias = self.conv.bias 2025-09-09T14:21:32.4638000Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.011070616543293, 38, -128, 127, torch.int8); x = None 2025-09-09T14:21:32.4639281Z dequantize_per_tensor_default_4 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.011070616543293, 38, -128, 127, torch.int8) 2025-09-09T14:21:32.4640785Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.011070616543293, 38, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:21:32.4642507Z conv2d = torch.ops.aten.conv2d.default(dequantize_per_tensor_default_3, dequantize_per_channel_default, conv_bias); dequantize_per_tensor_default_3 = dequantize_per_channel_default = conv_bias = None 2025-09-09T14:21:32.4643970Z quantize_per_tensor_default_1 = 
torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d, 0.0021018977276980877, -128, -128, 127, torch.int8); conv2d = None 2025-09-09T14:21:32.4645482Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0021018977276980877, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:21:32.4647089Z add_ = torch.ops.aten.add_.Tensor(dequantize_per_tensor_default_1, dequantize_per_tensor_default_4); dequantize_per_tensor_default_1 = dequantize_per_tensor_default_4 = None 2025-09-09T14:21:32.4647996Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:21:32.4648858Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu_, 0.005380525253713131, -128, -128, 127, torch.int8); relu_ = None 2025-09-09T14:21:32.4650357Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.005380525253713131, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:21:32.4651542Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:21:32.4652000Z 2025-09-09T14:21:32.4652310Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:32.4652726Z onverted model fx: GraphModule( 2025-09-09T14:21:32.4653154Z (conv): QuantizedConv2d(Reference)(1, 1, kernel_size=(1, 1), stride=(1, 1)) 2025-09-09T14:21:32.4653595Z (relu): ReLU(inplace=True) 2025-09-09T14:21:32.4653856Z ) 2025-09-09T14:21:32.4653963Z 2025-09-09T14:21:32.4653967Z 2025-09-09T14:21:32.4653971Z 2025-09-09T14:21:32.4654077Z def forward(self, x): 2025-09-09T14:21:32.4654761Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.011070616543293, 38, -128, 127, torch.int8); x = None 2025-09-09T14:21:32.4656168Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.011070616543293, 38, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:21:32.4657165Z conv = self.conv(dequantize_per_tensor_default) 2025-09-09T14:21:32.4658013Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0021018977276980877, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:21:32.4659530Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0021018977276980877, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:21:32.4660939Z add = dequantize_per_tensor_default_1 + dequantize_per_tensor_default; dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:21:32.4661666Z relu = self.relu(add); add = None 2025-09-09T14:21:32.4662465Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.005380525253713131, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:21:32.4663952Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.005380525253713131, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:21:32.4664982Z return dequantize_per_tensor_default_2 2025-09-09T14:21:32.4665273Z 2025-09-09T14:21:32.4665584Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:32.4665986Z diff: 
tensor([[[[0., 0., 0.], 2025-09-09T14:21:32.4666253Z [0., 0., 0.], 2025-09-09T14:21:32.4666488Z [0., 0., 0.]]]]) 2025-09-09T14:21:32.4666735Z model pt2e: GraphModule( 2025-09-09T14:21:32.4666990Z (conv): Module() 2025-09-09T14:21:32.4667384Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:32.4668489Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0002]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:21:32.4669782Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.05316734313964844, max_val=-0.05316734313964844) 2025-09-09T14:21:32.4670386Z ) 2025-09-09T14:21:32.4670750Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:32.4671818Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0111]), zero_point=tensor([38], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:32.4673098Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.8401848077774048, max_val=0.9828221797943115) 2025-09-09T14:21:32.4673683Z ) 2025-09-09T14:21:32.4673991Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:32.4675150Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:32.4676418Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.3858921527862549, max_val=0.5359839200973511) 2025-09-09T14:21:32.4677008Z ) 2025-09-09T14:21:32.4677300Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:32.4678381Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0054]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:32.4679607Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.372033953666687) 2025-09-09T14:21:32.4680130Z ) 2025-09-09T14:21:32.4680320Z ) 2025-09-09T14:21:32.4680427Z 2025-09-09T14:21:32.4680431Z 2025-09-09T14:21:32.4680435Z 2025-09-09T14:21:32.4680525Z def forward(self, x): 2025-09-09T14:21:32.4680842Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:21:32.4681222Z conv_weight = self.conv.weight 2025-09-09T14:21:32.4681729Z activation_post_process_1 = self.activation_post_process_1(conv_weight); conv_weight = None 2025-09-09T14:21:32.4682265Z conv_bias = self.conv.bias 2025-09-09T14:21:32.4682674Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:21:32.4683584Z conv2d = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, conv_bias); activation_post_process_1 = conv_bias = None 2025-09-09T14:21:32.4684501Z activation_post_process_2 = self.activation_post_process_2(conv2d); conv2d = None 2025-09-09T14:21:57.4301943Z add_ = torch.ops.aten.add_.Tensor(activation_post_process_2, activation_post_process_0); activation_post_process_2 = activation_post_process_0 = None 2025-09-09T14:21:57.4303434Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:21:57.4304221Z activation_post_process_3 = self.activation_post_process_3(relu_); relu_ = None 
2025-09-09T14:21:57.4305062Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:21:57.4305650Z 2025-09-09T14:21:57.4306057Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:57.4306615Z model fx: GraphModule( 2025-09-09T14:21:57.4307078Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:57.4308388Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0111]), zero_point=tensor([38], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:57.4310172Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.8401848077774048, max_val=0.9828221797943115) 2025-09-09T14:21:57.4310806Z ) 2025-09-09T14:21:57.4310997Z (conv): Conv2d( 2025-09-09T14:21:57.4311257Z 1, 1, kernel_size=(1, 1), stride=(1, 1) 2025-09-09T14:21:57.4311644Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:57.4312706Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0002]), zero_point=tensor([127], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:21:57.4314157Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.05316734313964844, max_val=-0.05316734313964844) 2025-09-09T14:21:57.4314832Z ) 2025-09-09T14:21:57.4315038Z ) 2025-09-09T14:21:57.4315338Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:57.4316445Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0021]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:57.4317762Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.3858921527862549, max_val=0.5359839200973511) 2025-09-09T14:21:57.4318337Z ) 2025-09-09T14:21:57.4318550Z (relu): ReLU(inplace=True) 2025-09-09T14:21:57.4318929Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:57.4320033Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0054]), zero_point=tensor([-128], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:57.4321299Z (activation_post_process): MovingAverageMinMaxObserver(min_val=0.0, max_val=1.372033953666687) 2025-09-09T14:21:57.4321843Z ) 2025-09-09T14:21:57.4322041Z ) 2025-09-09T14:21:57.4322151Z 2025-09-09T14:21:57.4322156Z 2025-09-09T14:21:57.4322164Z 2025-09-09T14:21:57.4322261Z def forward(self, x): 2025-09-09T14:21:57.4322662Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:21:57.4323158Z conv = self.conv(activation_post_process_0) 2025-09-09T14:21:57.4323646Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:21:57.4324447Z add = activation_post_process_1 + activation_post_process_0; activation_post_process_1 = activation_post_process_0 = None 2025-09-09T14:21:57.4325102Z relu = self.relu(add); add = None 2025-09-09T14:21:57.4325570Z activation_post_process_2 = self.activation_post_process_2(relu); relu = None 2025-09-09T14:21:57.4326043Z return activation_post_process_2 2025-09-09T14:21:57.4326340Z 2025-09-09T14:21:57.4326633Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:57.4327059Z diff: 
tensor([[[[0., 0., 0.], 2025-09-09T14:21:57.4327331Z [0., 0., 0.], 2025-09-09T14:21:57.4327596Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:21:57.4327946Z converted model pt2e: GraphModule( 2025-09-09T14:21:57.4328231Z (conv): Module() 2025-09-09T14:21:57.4328451Z ) 2025-09-09T14:21:57.4328553Z 2025-09-09T14:21:57.4328558Z 2025-09-09T14:21:57.4328562Z 2025-09-09T14:21:57.4328654Z def forward(self, x): 2025-09-09T14:21:57.4328977Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:21:57.4329406Z quantize_per_tensor_default = self._frozen_param0 2025-09-09T14:21:57.4330467Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.00041864049853757024, 0, -127, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:21:57.4331486Z conv_bias = self.conv.bias 2025-09-09T14:21:57.4332199Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.011070616543293, 38, -128, 127, torch.int8); x = None 2025-09-09T14:21:57.4333631Z dequantize_per_tensor_default_5 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.011070616543293, 38, -128, 127, torch.int8) 2025-09-09T14:21:57.4335182Z dequantize_per_tensor_default_4 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.011070616543293, 38, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:21:57.4336799Z conv2d = torch.ops.aten.conv2d.default(dequantize_per_tensor_default_4, dequantize_per_tensor_default, conv_bias); dequantize_per_tensor_default_4 = dequantize_per_tensor_default = conv_bias = None 2025-09-09T14:21:57.4338361Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d, 0.0021018977276980877, -128, -128, 127, torch.int8); conv2d = None 2025-09-09T14:21:57.4339887Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.0021018977276980877, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:21:57.4341422Z add_ = torch.ops.aten.add_.Tensor(dequantize_per_tensor_default_2, dequantize_per_tensor_default_5); dequantize_per_tensor_default_2 = dequantize_per_tensor_default_5 = None 2025-09-09T14:21:57.4342330Z relu_ = torch.ops.aten.relu_.default(add_); add_ = None 2025-09-09T14:21:57.4343200Z quantize_per_tensor_default_3 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu_, 0.005380525253713131, -128, -128, 127, torch.int8); relu_ = None 2025-09-09T14:21:57.4344695Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_3, 0.005380525253713131, -128, -128, 127, torch.int8); quantize_per_tensor_default_3 = None 2025-09-09T14:21:57.4345885Z return pytree.tree_unflatten((dequantize_per_tensor_default_3,), self._out_spec) 2025-09-09T14:21:57.4346344Z 2025-09-09T14:21:57.4346655Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:57.4347080Z onverted model fx: GraphModule( 2025-09-09T14:21:57.4347501Z (conv): QuantizedConv2d(Reference)(1, 1, kernel_size=(1, 1), stride=(1, 1)) 2025-09-09T14:21:57.4347960Z (relu): ReLU(inplace=True) 2025-09-09T14:21:57.4348210Z ) 2025-09-09T14:21:57.4348312Z 2025-09-09T14:21:57.4348328Z 2025-09-09T14:21:57.4348332Z 2025-09-09T14:21:57.4348422Z def forward(self, x): 2025-09-09T14:21:57.4349099Z 
quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.011070616543293, 38, -128, 127, torch.int8); x = None 2025-09-09T14:21:57.4350515Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.011070616543293, 38, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:21:57.4351533Z conv = self.conv(dequantize_per_tensor_default) 2025-09-09T14:21:57.4352368Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.0021018977276980877, -128, -128, 127, torch.int8); conv = None 2025-09-09T14:21:57.4353879Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.0021018977276980877, -128, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:21:57.4355379Z add = dequantize_per_tensor_default_1 + dequantize_per_tensor_default; dequantize_per_tensor_default_1 = dequantize_per_tensor_default = None 2025-09-09T14:21:57.4356104Z relu = self.relu(add); add = None 2025-09-09T14:21:57.4356907Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(relu, 0.005380525253713131, -128, -128, 127, torch.int8); relu = None 2025-09-09T14:21:57.4358486Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.005380525253713131, -128, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:21:57.4359533Z return dequantize_per_tensor_default_2 2025-09-09T14:21:57.4359848Z 2025-09-09T14:21:57.4360147Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:21:57.4360573Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:21:57.4360831Z [0., 0., 0.], 2025-09-09T14:21:57.4361065Z [0., 0., 0.]]]]) 2025-09-09T14:21:57.4361558Z PASSED 2025-09-09T14:21:57.4362377Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_per_channel_weight_custom_dtype PASSED 2025-09-09T14:21:57.4363715Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_preserve_source_fn_stack PASSED 2025-09-09T14:21:57.4364832Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_update_shared_qspec model pt2e: GraphModule( 2025-09-09T14:21:57.4365556Z (conv): Module() 2025-09-09T14:21:57.4365777Z (bn): Module() 2025-09-09T14:21:57.4366118Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:21:57.4367190Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:21:57.4368474Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:21:57.4369074Z ) 2025-09-09T14:21:57.4369370Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:06.4775752Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:22:06.4777913Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1919, -0.1859, -0.1499]), 
max_val=tensor([0.1902, 0.1880, 0.1882])) 2025-09-09T14:22:06.4778921Z ) 2025-09-09T14:22:06.4779309Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:06.4780754Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:06.4782447Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5212559700012207, max_val=2.179866313934326) 2025-09-09T14:22:06.4783249Z ) 2025-09-09T14:22:06.4783646Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:06.4785248Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:06.4786955Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5212559700012207, max_val=2.179866313934326) 2025-09-09T14:22:06.4787729Z ) 2025-09-09T14:22:06.4787981Z ) 2025-09-09T14:22:06.4788120Z 2025-09-09T14:22:06.4788126Z 2025-09-09T14:22:06.4788131Z 2025-09-09T14:22:06.4788268Z def forward(self, x): 2025-09-09T14:22:06.4788664Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:22:06.4789165Z conv_weight = self.conv.weight 2025-09-09T14:22:06.4789556Z conv_bias = self.conv.bias 2025-09-09T14:22:06.4789927Z bn_weight = self.bn.weight 2025-09-09T14:22:06.4790273Z bn_bias = self.bn.bias 2025-09-09T14:22:06.4790645Z bn_running_mean = self.bn.running_mean 2025-09-09T14:22:06.4791079Z bn_running_var = self.bn.running_var 2025-09-09T14:22:06.4791566Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:22:06.4792228Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:22:06.4794210Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:22:06.4795105Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:22:06.4795663Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:22:06.4796266Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:22:06.4796898Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:22:06.4797651Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:22:06.4798635Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:22:06.4799542Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:22:06.4801048Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:22:06.4802393Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:22:06.4803196Z div_1 = torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:22:06.4804067Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 2025-09-09T14:22:06.4804893Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:22:06.4806227Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 
= bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:22:06.4807641Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:22:06.4808499Z hardtanh = torch.ops.aten.hardtanh.default(activation_post_process_2, -1.0, 1.0); activation_post_process_2 = None 2025-09-09T14:22:06.4809320Z activation_post_process_3 = self.activation_post_process_3(hardtanh); hardtanh = None 2025-09-09T14:22:06.4810156Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:22:06.4810606Z 2025-09-09T14:22:06.4810906Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:22:06.4811321Z model fx: GraphModule( 2025-09-09T14:22:06.4811664Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:06.4812752Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:06.4814032Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:22:06.4814607Z ) 2025-09-09T14:22:06.4814810Z (conv): ConvBn2d( 2025-09-09T14:22:06.4815058Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:22:06.4815528Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:22:06.4816048Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:06.4817168Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015, 0.0015, 0.0015]), zero_point=tensor([0, 0, 0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_channel_symmetric, reduce_range=False 2025-09-09T14:22:06.4818679Z (activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([-0.1919, -0.1859, -0.1499]), max_val=tensor([0.1902, 0.1880, 0.1882])) 2025-09-09T14:22:06.4819418Z ) 2025-09-09T14:22:06.4819614Z ) 2025-09-09T14:22:06.4819904Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:06.4821146Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:06.4822434Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5212559700012207, max_val=2.179866313934326) 2025-09-09T14:22:06.4823012Z ) 2025-09-09T14:22:06.4823261Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:22:06.4823692Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:06.4824787Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:06.4826167Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.5212559700012207, max_val=2.179866313934326) 2025-09-09T14:22:06.4826738Z ) 2025-09-09T14:22:06.4826932Z ) 2025-09-09T14:22:06.4827036Z 2025-09-09T14:22:06.4827041Z 2025-09-09T14:22:06.4827049Z 2025-09-09T14:22:06.4827142Z def forward(self, x): 2025-09-09T14:22:06.4827539Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:22:06.4828142Z conv = self.conv(activation_post_process_0); 
activation_post_process_0 = None 2025-09-09T14:22:06.4828750Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:22:06.4829413Z hardtanh = self.hardtanh(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:22:06.4830102Z activation_post_process_2 = self.activation_post_process_2(hardtanh); hardtanh = None 2025-09-09T14:22:06.4830625Z return activation_post_process_2 2025-09-09T14:22:06.4830905Z 2025-09-09T14:22:06.4831213Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:22:06.4831638Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:22:06.4831888Z [0., 0., 0.], 2025-09-09T14:22:06.4832125Z [0., 0., 0.]], 2025-09-09T14:22:06.4832275Z 2025-09-09T14:22:06.4832362Z [[0., 0., 0.], 2025-09-09T14:22:06.4832596Z [0., 0., 0.], 2025-09-09T14:22:06.4832813Z [0., 0., 0.]], 2025-09-09T14:22:06.4833012Z 2025-09-09T14:22:06.4833092Z [[0., 0., 0.], 2025-09-09T14:22:06.4833325Z [0., 0., 0.], 2025-09-09T14:22:06.4833575Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:22:06.4833925Z converted model pt2e: GraphModule( 2025-09-09T14:22:06.4834206Z (conv): Module() 2025-09-09T14:22:06.4834435Z (bn): Module() 2025-09-09T14:22:06.4834724Z ) 2025-09-09T14:22:06.4834842Z 2025-09-09T14:22:06.4834846Z 2025-09-09T14:22:06.4834850Z 2025-09-09T14:22:06.4834942Z def forward(self, x): 2025-09-09T14:22:06.4835244Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:22:06.4835620Z conv_bias = self.conv.bias 2025-09-09T14:22:06.4836365Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:22:06.4837802Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:22:06.4838797Z _scale_0 = self._scale_0 2025-09-09T14:22:06.4839072Z _zero_point_0 = self._zero_point_0 2025-09-09T14:22:06.4839412Z quantize_per_channel = self._frozen_param0 2025-09-09T14:22:06.4840441Z dequantize_per_channel = torch.ops.quantized_decomposed.dequantize_per_channel.default(quantize_per_channel, _scale_0, _zero_point_0, 0, -127, 127, torch.int8); quantize_per_channel = _scale_0 = _zero_point_0 = None 2025-09-09T14:22:06.4842016Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_channel, conv_bias); dequantize_per_tensor_default = dequantize_per_channel = conv_bias = None 2025-09-09T14:22:06.4843523Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.014514205045998096, -23, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:22:06.4845046Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.014514205045998096, -23, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:22:06.4846404Z hardtanh = torch.ops.aten.hardtanh.default(dequantize_per_tensor_default_1, -1.0, 1.0); dequantize_per_tensor_default_1 = None 2025-09-09T14:22:06.4847666Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 0.014514205045998096, -23, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:22:08.2533146Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.014514205045998096, -23, -128, 127, 
torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:22:08.2534584Z return pytree.tree_unflatten((dequantize_per_tensor_default_2,), self._out_spec) 2025-09-09T14:22:08.2535060Z 2025-09-09T14:22:08.2535391Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:22:08.2535814Z onverted model fx: GraphModule( 2025-09-09T14:22:08.2536261Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:22:08.2536751Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:22:08.2537094Z ) 2025-09-09T14:22:08.2537201Z 2025-09-09T14:22:08.2537206Z 2025-09-09T14:22:08.2537227Z 2025-09-09T14:22:08.2537336Z def forward(self, x): 2025-09-09T14:22:08.2538163Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:22:08.2539608Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:22:08.2540949Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:22:08.2541938Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.014514205045998096, -23, -128, 127, torch.int8); conv = None 2025-09-09T14:22:08.2543438Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.014514205045998096, -23, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:22:08.2544671Z hardtanh = self.hardtanh(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:22:08.2545745Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 0.014514205045998096, -23, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:22:08.2547278Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.014514205045998096, -23, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:22:08.2548285Z return dequantize_per_tensor_default_2 2025-09-09T14:22:08.2548603Z 2025-09-09T14:22:08.2548900Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:22:08.2549323Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:22:08.2549578Z [0., 0., 0.], 2025-09-09T14:22:08.2549817Z [0., 0., 0.]], 2025-09-09T14:22:08.2549965Z 2025-09-09T14:22:08.2550065Z [[0., 0., 0.], 2025-09-09T14:22:08.2550295Z [0., 0., 0.], 2025-09-09T14:22:08.2550530Z [0., 0., 0.]], 2025-09-09T14:22:08.2550679Z 2025-09-09T14:22:08.2550759Z [[0., 0., 0.], 2025-09-09T14:22:08.2550987Z [0., 0., 0.], 2025-09-09T14:22:08.2551207Z [0., 0., 0.]]]]) 2025-09-09T14:22:08.2551468Z model pt2e: GraphModule( 2025-09-09T14:22:08.2551764Z (conv): Module() 2025-09-09T14:22:08.2551975Z (bn): Module() 2025-09-09T14:22:08.2552724Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2553803Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:08.2555365Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:22:08.2555954Z ) 
2025-09-09T14:22:08.2556460Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2557559Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.int8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:22:08.2558836Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19193142652511597, max_val=0.1902383267879486) 2025-09-09T14:22:08.2559436Z ) 2025-09-09T14:22:08.2559742Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2560805Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:08.2562063Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.521487832069397, max_val=2.1819007396698) 2025-09-09T14:22:08.2562639Z ) 2025-09-09T14:22:08.2562950Z (activation_post_process_3): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2564031Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.int8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:08.2565282Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.521487832069397, max_val=2.1819007396698) 2025-09-09T14:22:08.2565861Z ) 2025-09-09T14:22:08.2566036Z ) 2025-09-09T14:22:08.2566151Z 2025-09-09T14:22:08.2566155Z 2025-09-09T14:22:08.2566159Z 2025-09-09T14:22:08.2566248Z def forward(self, x): 2025-09-09T14:22:08.2566561Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:22:08.2566926Z conv_weight = self.conv.weight 2025-09-09T14:22:08.2567232Z conv_bias = self.conv.bias 2025-09-09T14:22:08.2567503Z bn_weight = self.bn.weight 2025-09-09T14:22:08.2567785Z bn_bias = self.bn.bias 2025-09-09T14:22:08.2568055Z bn_running_mean = self.bn.running_mean 2025-09-09T14:22:08.2568391Z bn_running_var = self.bn.running_var 2025-09-09T14:22:08.2568749Z bn_num_batches_tracked = self.bn.num_batches_tracked 2025-09-09T14:22:08.2569247Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:22:08.2569917Z add_ = torch.ops.aten.add_.Tensor(bn_num_batches_tracked, 1); bn_num_batches_tracked = add_ = None 2025-09-09T14:22:08.2570503Z add = torch.ops.aten.add.Tensor(bn_running_var, 1e-05) 2025-09-09T14:22:08.2570942Z sqrt = torch.ops.aten.sqrt.default(add); add = None 2025-09-09T14:22:08.2571390Z div = torch.ops.aten.div.Tensor(bn_weight, sqrt); sqrt = None 2025-09-09T14:22:08.2571883Z reshape = torch.ops.aten.reshape.default(div, [-1, 1, 1, 1]) 2025-09-09T14:22:08.2572445Z mul = torch.ops.aten.mul.Tensor(conv_weight, reshape); conv_weight = reshape = None 2025-09-09T14:22:08.2573089Z activation_post_process_1 = self.activation_post_process_1(mul); mul = None 2025-09-09T14:22:08.2573784Z zeros_like = torch.ops.aten.zeros_like.default(conv_bias, dtype = torch.float32, pin_memory = False) 2025-09-09T14:22:08.2574894Z conv2d_1 = torch.ops.aten.conv2d.default(activation_post_process_0, activation_post_process_1, zeros_like); activation_post_process_0 = activation_post_process_1 = zeros_like = None 2025-09-09T14:22:08.2576006Z reshape_1 = torch.ops.aten.reshape.default(div, [1, -1, 1, 1]); div = None 2025-09-09T14:22:08.2576614Z div_1 = 
torch.ops.aten.div.Tensor(conv2d_1, reshape_1); conv2d_1 = reshape_1 = None 2025-09-09T14:22:08.2577299Z reshape_2 = torch.ops.aten.reshape.default(conv_bias, [1, -1, 1, 1]); conv_bias = None 2025-09-09T14:22:08.2577936Z add_1 = torch.ops.aten.add.Tensor(div_1, reshape_2); div_1 = reshape_2 = None 2025-09-09T14:22:08.2578948Z batch_norm_1 = torch.ops.aten.batch_norm.default(add_1, bn_weight, bn_bias, bn_running_mean, bn_running_var, True, 0.1, 1e-05, True); add_1 = bn_weight = bn_bias = bn_running_mean = bn_running_var = None 2025-09-09T14:22:08.2580101Z activation_post_process_2 = self.activation_post_process_2(batch_norm_1); batch_norm_1 = None 2025-09-09T14:22:08.2580949Z hardtanh = torch.ops.aten.hardtanh.default(activation_post_process_2, -1.0, 1.0); activation_post_process_2 = None 2025-09-09T14:22:08.2581779Z activation_post_process_3 = self.activation_post_process_3(hardtanh); hardtanh = None 2025-09-09T14:22:08.2582420Z return pytree.tree_unflatten((activation_post_process_3,), self._out_spec) 2025-09-09T14:22:08.2582868Z 2025-09-09T14:22:08.2583173Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:22:08.2583592Z model fx: GraphModule( 2025-09-09T14:22:08.2583955Z (activation_post_process_0): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2585027Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0183]), zero_point=tensor([10], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:08.2586310Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-2.526270866394043, max_val=2.143237352371216) 2025-09-09T14:22:08.2586886Z ) 2025-09-09T14:22:08.2587093Z (conv): ConvBn2d( 2025-09-09T14:22:08.2587333Z 3, 3, kernel_size=(3, 3), stride=(1, 1) 2025-09-09T14:22:08.2587801Z (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 2025-09-09T14:22:08.2588324Z (weight_fake_quant): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2589383Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0015]), zero_point=tensor([0], dtype=torch.int32), dtype=torch.qint8, quant_min=-127, quant_max=127, qscheme=torch.per_tensor_symmetric, reduce_range=False 2025-09-09T14:22:08.2590685Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-0.19193142652511597, max_val=0.1902383267879486) 2025-09-09T14:22:08.2591272Z ) 2025-09-09T14:22:08.2591468Z ) 2025-09-09T14:22:08.2591773Z (activation_post_process_1): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2592841Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:08.2594113Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.521487832069397, max_val=2.1819007396698) 2025-09-09T14:22:08.2594775Z ) 2025-09-09T14:22:08.2595019Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:22:08.2595448Z (activation_post_process_2): FusedMovingAvgObsFakeQuantize( 2025-09-09T14:22:08.2596529Z fake_quant_enabled=tensor([1]), observer_enabled=tensor([1]), scale=tensor([0.0145]), zero_point=tensor([-23], dtype=torch.int32), dtype=torch.qint8, quant_min=-128, quant_max=127, qscheme=torch.per_tensor_affine, reduce_range=False 2025-09-09T14:22:08.2597801Z (activation_post_process): MovingAverageMinMaxObserver(min_val=-1.521487832069397, 
max_val=2.1819007396698) 2025-09-09T14:22:08.2598370Z ) 2025-09-09T14:22:08.2598560Z ) 2025-09-09T14:22:08.2598661Z 2025-09-09T14:22:08.2598665Z 2025-09-09T14:22:08.2598669Z 2025-09-09T14:22:08.2598760Z def forward(self, x): 2025-09-09T14:23:15.4786256Z activation_post_process_0 = self.activation_post_process_0(x); x = None 2025-09-09T14:23:15.4787470Z conv = self.conv(activation_post_process_0); activation_post_process_0 = None 2025-09-09T14:23:15.4788290Z activation_post_process_1 = self.activation_post_process_1(conv); conv = None 2025-09-09T14:23:15.4789173Z hardtanh = self.hardtanh(activation_post_process_1); activation_post_process_1 = None 2025-09-09T14:23:15.4790123Z activation_post_process_2 = self.activation_post_process_2(hardtanh); hardtanh = None 2025-09-09T14:23:15.4790796Z return activation_post_process_2 2025-09-09T14:23:15.4791297Z 2025-09-09T14:23:15.4791707Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:23:15.4792238Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:23:15.4792580Z [0., 0., 0.], 2025-09-09T14:23:15.4792868Z [0., 0., 0.]], 2025-09-09T14:23:15.4793079Z 2025-09-09T14:23:15.4793185Z [[0., 0., 0.], 2025-09-09T14:23:15.4793470Z [0., 0., 0.], 2025-09-09T14:23:15.4793768Z [0., 0., 0.]], 2025-09-09T14:23:15.4793966Z 2025-09-09T14:23:15.4794070Z [[0., 0., 0.], 2025-09-09T14:23:15.4794364Z [0., 0., 0.], 2025-09-09T14:23:15.4794775Z [0., 0., 0.]]]], grad_fn=) 2025-09-09T14:23:15.4795208Z converted model pt2e: GraphModule( 2025-09-09T14:23:15.4795589Z (conv): Module() 2025-09-09T14:23:15.4795868Z (bn): Module() 2025-09-09T14:23:15.4796152Z ) 2025-09-09T14:23:15.4796285Z 2025-09-09T14:23:15.4796290Z 2025-09-09T14:23:15.4796295Z 2025-09-09T14:23:15.4796411Z def forward(self, x): 2025-09-09T14:23:15.4796821Z x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec) 2025-09-09T14:23:15.4797293Z conv_bias = self.conv.bias 2025-09-09T14:23:15.4798278Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:23:15.4800217Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:23:15.4801573Z quantize_per_tensor = self._frozen_param0 2025-09-09T14:23:15.4802789Z dequantize_per_tensor = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor, 0.0015112711116671562, 0, -127, 127, torch.int8); quantize_per_tensor = None 2025-09-09T14:23:15.4804752Z conv2d_2 = torch.ops.aten.conv2d.default(dequantize_per_tensor_default, dequantize_per_tensor, conv_bias); dequantize_per_tensor_default = dequantize_per_tensor = conv_bias = None 2025-09-09T14:23:15.4806605Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv2d_2, 0.014523092657327652, -23, -128, 127, torch.int8); conv2d_2 = None 2025-09-09T14:23:15.4808634Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.014523092657327652, -23, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:23:15.4810628Z hardtanh = torch.ops.aten.hardtanh.default(dequantize_per_tensor_default_2, -1.0, 1.0); dequantize_per_tensor_default_2 = None 2025-09-09T14:23:15.4812211Z quantize_per_tensor_default_3 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 
0.014523092657327652, -23, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:23:15.4814261Z dequantize_per_tensor_default_3 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_3, 0.014523092657327652, -23, -128, 127, torch.int8); quantize_per_tensor_default_3 = None 2025-09-09T14:23:15.4815577Z return pytree.tree_unflatten((dequantize_per_tensor_default_3,), self._out_spec) 2025-09-09T14:23:15.4816056Z 2025-09-09T14:23:15.4816376Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:23:15.4816792Z converted model fx: GraphModule( 2025-09-09T14:23:15.4817364Z (conv): QuantizedConv2d(Reference)(3, 3, kernel_size=(3, 3), stride=(1, 1)) 2025-09-09T14:23:15.4817851Z (hardtanh): Hardtanh(min_val=-1.0, max_val=1.0) 2025-09-09T14:23:15.4818185Z ) 2025-09-09T14:23:15.4818294Z 2025-09-09T14:23:15.4818298Z 2025-09-09T14:23:15.4818302Z 2025-09-09T14:23:15.4818394Z def forward(self, x): 2025-09-09T14:23:15.4819108Z quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, 0.018311796709895134, 10, -128, 127, torch.int8); x = None 2025-09-09T14:23:15.4820552Z dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, 0.018311796709895134, 10, -128, 127, torch.int8); quantize_per_tensor_default = None 2025-09-09T14:23:15.4821795Z conv = self.conv(dequantize_per_tensor_default); dequantize_per_tensor_default = None 2025-09-09T14:23:15.4822785Z quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(conv, 0.014523092657327652, -23, -128, 127, torch.int8); conv = None 2025-09-09T14:23:15.4824270Z dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, 0.014523092657327652, -23, -128, 127, torch.int8); quantize_per_tensor_default_1 = None 2025-09-09T14:23:15.4825506Z hardtanh = self.hardtanh(dequantize_per_tensor_default_1); dequantize_per_tensor_default_1 = None 2025-09-09T14:23:15.4826590Z quantize_per_tensor_default_2 = torch.ops.quantized_decomposed.quantize_per_tensor.default(hardtanh, 0.014523092657327652, -23, -128, 127, torch.int8); hardtanh = None 2025-09-09T14:23:15.4828110Z dequantize_per_tensor_default_2 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_2, 0.014523092657327652, -23, -128, 127, torch.int8); quantize_per_tensor_default_2 = None 2025-09-09T14:23:15.4829127Z return dequantize_per_tensor_default_2 2025-09-09T14:23:15.4829436Z 2025-09-09T14:23:15.4829738Z # To see more debug info, please use `graph_module.print_readable()` 2025-09-09T14:23:15.4830166Z diff: tensor([[[[0., 0., 0.], 2025-09-09T14:23:15.4830425Z [0., 0., 0.], 2025-09-09T14:23:15.4830661Z [0., 0., 0.]], 2025-09-09T14:23:15.4830813Z 2025-09-09T14:23:15.4830911Z [[0., 0., 0.], 2025-09-09T14:23:15.4831131Z [0., 0., 0.], 2025-09-09T14:23:15.4831365Z [0., 0., 0.]], 2025-09-09T14:23:15.4831516Z 2025-09-09T14:23:15.4831599Z [[0., 0., 0.], 2025-09-09T14:23:15.4831832Z [0., 0., 0.], 2025-09-09T14:23:15.4832053Z [0., 0., 0.]]]]) 2025-09-09T14:23:15.4832515Z PASSED 2025-09-09T14:23:15.4833235Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQATModels::test_qat_mobilenet_v2 SKIPPED 2025-09-09T14:23:15.4834330Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQATModels::test_qat_resnet18 SKIPPED 2025-09-09T14:23:15.4835501Z
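The scale/zero_point pairs printed in the converted graphs above (for example scale=0.018311796709895134, zero_point=10 on the model input) are consumed by the torch.ops.quantized_decomposed quantize/dequantize ops. A minimal sketch of the per-tensor affine math those ops are understood to implement is below; the helper names, the input shape, and the standalone-script framing are illustrative only, not the ops' actual implementation.

import torch

def quantize_per_tensor(x, scale, zero_point, quant_min, quant_max, dtype):
    # q = clamp(round(x / scale) + zero_point, quant_min, quant_max), then cast
    q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    return q.to(dtype)

def dequantize_per_tensor(q, scale, zero_point):
    # x_hat = (q - zero_point) * scale, back in float
    return (q.to(torch.float32) - zero_point) * scale

# Values taken from the input Q/DQ pair in the graph above; the shape is arbitrary.
x = torch.randn(1, 3, 5, 5)
q = quantize_per_tensor(x, 0.018311796709895134, 10, -128, 127, torch.int8)
x_hat = dequantize_per_tensor(q, 0.018311796709895134, 10)

Both converted graphs route every tensor through the same Q/DQ round trip with the same scales, which is consistent with the all-zero diff tensors printed above.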
test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizeMixQATAndPTQ::test_mixing_qat_ptq PASSED 2025-09-09T14:23:15.4836490Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_add PASSED 2025-09-09T14:23:15.4837443Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_add_relu PASSED 2025-09-09T14:23:15.4838387Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_conv2d PASSED 2025-09-09T14:23:15.4839388Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_dynamic_linear PASSED 2025-09-09T14:23:15.4840400Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_maxpool2d PASSED 2025-09-09T14:23:15.4841696Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_qdq PASSED 2025-09-09T14:23:15.4843013Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_qdq_per_channel PASSED 2025-09-09T14:23:15.4844510Z test/quantization/pt2e/test_representation.py::TestPT2ERepresentation::test_static_linear PASSED 2025-09-09T14:23:15.4846429Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:15.4847999Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:23:15.4848568Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:15.4850193Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:15.4851787Z graph_break [] 2025-09-09T14:23:15.4852108Z PASSED 2025-09-09T14:23:15.4853502Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:15.4855031Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:15.4855611Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:15.4856958Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:15.4858157Z graph_break [] 2025-09-09T14:23:15.4858480Z PASSED 2025-09-09T14:23:15.4859834Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:15.4861366Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:15.4861943Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:15.4863908Z inductor [('pattern_matcher_nodes', 7), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_binary_matcher_nodes', 2), ('qlinear_weight_prepack_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:15.4865755Z graph_break [] 
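The frames/stats/aot_autograd/inductor/graph_break lines attached to each fusion test above are TorchDynamo's global counters, dumped after compilation. A minimal sketch of where those counters live is below, assuming only the public torch.compile entry point; the toy function is hypothetical, and a plain matmul will not populate the qlinear_* pattern-matcher keys the tests check for, since those only appear when the x86 quantized-linear patterns actually match.

import torch
from torch._dynamo.utils import counters

def f(a, b):
    return torch.mm(a, b)

counters.clear()
compiled = torch.compile(f)
compiled(torch.randn(8, 16), torch.randn(16, 4))

# counters is a defaultdict of Counter objects keyed by category
# ("frames", "stats", "aot_autograd", "inductor", "graph_break", ...).
print({k: dict(v) for k, v in counters.items()})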
2025-09-09T14:23:15.4866068Z PASSED 2025-09-09T14:23:15.4867449Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:15.4868976Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:28.1225454Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1226925Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1228138Z graph_break [] 2025-09-09T14:23:28.1228645Z PASSED 2025-09-09T14:23:28.1230063Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1231654Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:23:28.1232219Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1233559Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1234796Z graph_break [] 2025-09-09T14:23:28.1235399Z PASSED 2025-09-09T14:23:28.1236791Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1238338Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:28.1238935Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1240279Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1241589Z graph_break [] 2025-09-09T14:23:28.1241919Z PASSED 2025-09-09T14:23:28.1243298Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1244845Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:23:28.1245426Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1246749Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1247948Z graph_break [] 2025-09-09T14:23:28.1248262Z PASSED 2025-09-09T14:23:28.1249651Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1251198Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:28.1251759Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:23:28.1253096Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1254221Z graph_break [] 2025-09-09T14:23:28.1254467Z PASSED 2025-09-09T14:23:28.1255503Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1256635Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:28.1257081Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1258282Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1259396Z graph_break [] 2025-09-09T14:23:28.1259647Z PASSED 2025-09-09T14:23:28.1260663Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1261805Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:28.1262233Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1263232Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1264131Z graph_break [] 2025-09-09T14:23:28.1264370Z PASSED 2025-09-09T14:23:28.1265475Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1266610Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:28.1267057Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1268517Z inductor [('pattern_matcher_nodes', 8), ('pattern_matcher_count', 4), ('qlinear_weight_prepack_matcher_nodes', 4), ('qlinear_binary_matcher_nodes', 2), ('qlinear_weight_prepack_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1269975Z graph_break [] 2025-09-09T14:23:28.1270234Z PASSED 2025-09-09T14:23:28.1271266Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1272402Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:28.1272848Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1273838Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1274813Z graph_break [] 
2025-09-09T14:23:28.1275058Z PASSED 2025-09-09T14:23:28.1276098Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1277249Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:28.1277686Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1278687Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1279581Z graph_break [] 2025-09-09T14:23:28.1279822Z PASSED 2025-09-09T14:23:28.1280857Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1282351Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:28.1282926Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1284248Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1285449Z graph_break [] 2025-09-09T14:23:28.1285785Z PASSED 2025-09-09T14:23:28.1287149Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1288681Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:28.1289245Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1290578Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1291784Z graph_break [] 2025-09-09T14:23:28.1292097Z PASSED 2025-09-09T14:23:28.1293564Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1295084Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:28.1295660Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:28.1296992Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:28.1298186Z graph_break [] 2025-09-09T14:23:28.1298586Z PASSED 2025-09-09T14:23:28.1299951Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:28.1301493Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:23:28.1302066Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:23:53.3797862Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3799422Z graph_break [] 2025-09-09T14:23:53.3799955Z PASSED 2025-09-09T14:23:53.3801378Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3802925Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:53.3803509Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3804856Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3806049Z graph_break [] 2025-09-09T14:23:53.3806399Z PASSED 2025-09-09T14:23:53.3807802Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3809320Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:23:53.3809900Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3811663Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3813161Z graph_break [] 2025-09-09T14:23:53.3823397Z PASSED 2025-09-09T14:23:53.3824612Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3825774Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:53.3826215Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3827235Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3828140Z graph_break [] 2025-09-09T14:23:53.3828497Z PASSED 2025-09-09T14:23:53.3829551Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3830948Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:23:53.3831405Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3832402Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3833305Z graph_break [] 2025-09-09T14:23:53.3833575Z PASSED 2025-09-09T14:23:53.3834679Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3835982Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:53.3836418Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3837429Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3838337Z graph_break [] 2025-09-09T14:23:53.3838590Z PASSED 2025-09-09T14:23:53.3839620Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3840749Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:23:53.3841198Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3842205Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3843094Z graph_break [] 2025-09-09T14:23:53.3843352Z PASSED 2025-09-09T14:23:53.3844364Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3845521Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:23:53.3845969Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3846947Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3847855Z graph_break [] 2025-09-09T14:23:53.3848102Z PASSED 2025-09-09T14:23:53.3849238Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3850392Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:53.3850828Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3852045Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3853154Z graph_break [] 2025-09-09T14:23:53.3853425Z PASSED 2025-09-09T14:23:53.3854814Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3856349Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:23:53.3856929Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:23:53.3858361Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3859576Z graph_break [] 2025-09-09T14:23:53.3859909Z PASSED 2025-09-09T14:23:53.3861283Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3862892Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:53.3863463Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3865096Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3866598Z graph_break [] 2025-09-09T14:23:53.3866921Z PASSED 2025-09-09T14:23:53.3868286Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3869791Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:23:53.3870371Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3871715Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3872911Z graph_break [] 2025-09-09T14:23:53.3873248Z PASSED 2025-09-09T14:23:53.3874678Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3876216Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:23:53.3876788Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:23:53.3878132Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:23:53.3879348Z graph_break [] 2025-09-09T14:23:53.3879674Z PASSED 2025-09-09T14:23:53.3881051Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:23:53.3882573Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:23:53.3883163Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1845990Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1847230Z graph_break [] 2025-09-09T14:24:04.1847765Z PASSED 2025-09-09T14:24:04.1849183Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1850867Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:04.1851436Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1853094Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1854287Z graph_break [] 2025-09-09T14:24:04.1854638Z PASSED 2025-09-09T14:24:04.1856016Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1857528Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:04.1858106Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1859555Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1860758Z graph_break [] 2025-09-09T14:24:04.1861091Z PASSED 2025-09-09T14:24:04.1862463Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1864003Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:24:04.1864566Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1866191Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1867681Z graph_break [] 2025-09-09T14:24:04.1867993Z PASSED 2025-09-09T14:24:04.1869365Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1870882Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:04.1871458Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1872800Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1873989Z graph_break [] 2025-09-09T14:24:04.1874316Z PASSED 2025-09-09T14:24:04.1875746Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1877293Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:04.1877872Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:24:04.1879658Z inductor [('pattern_matcher_nodes', 7), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_binary_matcher_nodes', 2), ('qlinear_weight_prepack_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1881228Z graph_break [] 2025-09-09T14:24:04.1881549Z PASSED 2025-09-09T14:24:04.1882920Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1884457Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:04.1885021Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1886461Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1887660Z graph_break [] 2025-09-09T14:24:04.1887991Z PASSED 2025-09-09T14:24:04.1889376Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1890904Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:24:04.1891483Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1892891Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1894126Z graph_break [] 2025-09-09T14:24:04.1894432Z PASSED 2025-09-09T14:24:04.1895823Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1897337Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:04.1897914Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1899240Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1900433Z graph_break [] 2025-09-09T14:24:04.1900749Z PASSED 2025-09-09T14:24:04.1902106Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1903639Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:24:04.1904201Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1905533Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1906743Z graph_break [] 2025-09-09T14:24:04.1907051Z PASSED 2025-09-09T14:24:04.1908415Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1910111Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:04.1910675Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1912018Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1913208Z graph_break [] 2025-09-09T14:24:04.1913540Z PASSED 2025-09-09T14:24:04.1914965Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1916494Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:04.1917076Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1918691Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1920190Z graph_break [] 2025-09-09T14:24:04.1920505Z PASSED 2025-09-09T14:24:04.1922016Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1923543Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:04.1924104Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:04.1925436Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:04.1926473Z graph_break [] 2025-09-09T14:24:04.1926733Z PASSED 2025-09-09T14:24:04.1927760Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:04.1928889Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:04.1929327Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0858265Z inductor [('pattern_matcher_nodes', 8), ('pattern_matcher_count', 4), ('qlinear_weight_prepack_matcher_nodes', 4), ('qlinear_binary_matcher_nodes', 2), ('qlinear_weight_prepack_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0860266Z graph_break [] 2025-09-09T14:24:24.0860929Z PASSED 2025-09-09T14:24:24.0862358Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0863863Z stats 
[('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:24.0864473Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0865812Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0867005Z graph_break [] 2025-09-09T14:24:24.0867334Z PASSED 2025-09-09T14:24:24.0868693Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0870222Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:24.0870794Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0872117Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0873310Z graph_break [] 2025-09-09T14:24:24.0873622Z PASSED 2025-09-09T14:24:24.0875045Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0876302Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:24.0876736Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0877737Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0878624Z graph_break [] 2025-09-09T14:24:24.0878880Z PASSED 2025-09-09T14:24:24.0880880Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0882056Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:24.0882497Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0883480Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0884484Z graph_break [] 2025-09-09T14:24:24.0884750Z PASSED 2025-09-09T14:24:24.0885752Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0886876Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:24.0887313Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0888314Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0889204Z graph_break [] 2025-09-09T14:24:24.0889442Z PASSED 2025-09-09T14:24:24.0890460Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0891591Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:24:24.0892045Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0893252Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0894359Z graph_break [] 2025-09-09T14:24:24.0894595Z PASSED 2025-09-09T14:24:24.0895613Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0896755Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:24.0897188Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0898183Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0899067Z graph_break [] 2025-09-09T14:24:24.0899321Z PASSED 2025-09-09T14:24:24.0900327Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0901459Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:24:24.0901908Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0903108Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0904229Z graph_break [] 2025-09-09T14:24:24.0904471Z PASSED 2025-09-09T14:24:24.0905566Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0906685Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:24.0907113Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0908104Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0908999Z graph_break [] 2025-09-09T14:24:24.0909299Z PASSED 2025-09-09T14:24:24.0910708Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0911848Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:24:24.0912290Z aot_autograd 
[('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0913274Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0914161Z graph_break [] 2025-09-09T14:24:24.0914421Z PASSED 2025-09-09T14:24:24.0915488Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0916623Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:24.0917049Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0918030Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0918922Z graph_break [] 2025-09-09T14:24:24.0919163Z PASSED 2025-09-09T14:24:24.0920372Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0921876Z stats [('calls_captured', 4), ('unique_graphs', 1)] 2025-09-09T14:24:24.0922450Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:24.0923800Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:24.0924987Z graph_break [] 2025-09-09T14:24:24.0925311Z PASSED 2025-09-09T14:24:24.0926660Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:24.0928178Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:24.0928753Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0770055Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0771026Z graph_break [] 2025-09-09T14:24:44.0771496Z PASSED 2025-09-09T14:24:44.0772545Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0773707Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:44.0774447Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0775657Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0776779Z graph_break [] 2025-09-09T14:24:44.0777038Z PASSED 2025-09-09T14:24:44.0778057Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0779308Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:44.0779742Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0780751Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0781641Z graph_break [] 2025-09-09T14:24:44.0781903Z PASSED 2025-09-09T14:24:44.0782924Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0784035Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:44.0784484Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0785690Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0786800Z graph_break [] 2025-09-09T14:24:44.0787054Z PASSED 2025-09-09T14:24:44.0788052Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0789172Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:44.0789603Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0790619Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0791801Z graph_break [] 2025-09-09T14:24:44.0792104Z PASSED 2025-09-09T14:24:44.0793141Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0794288Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:44.0794822Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0795824Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0796712Z graph_break [] 2025-09-09T14:24:44.0796974Z PASSED 2025-09-09T14:24:44.0798151Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0799300Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:44.0799739Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:24:44.0800854Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0801758Z graph_break [] 2025-09-09T14:24:44.0802010Z PASSED 2025-09-09T14:24:44.0803033Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0804228Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:44.0804660Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0805648Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0806527Z graph_break [] 2025-09-09T14:24:44.0806782Z PASSED 2025-09-09T14:24:44.0807791Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_False_float32_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0808932Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:44.0809373Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0810589Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0811497Z graph_break [] 2025-09-09T14:24:44.0811752Z PASSED 2025-09-09T14:24:44.0812799Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0813953Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:44.0814388Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0815599Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0816710Z graph_break [] 2025-09-09T14:24:44.0816979Z PASSED 2025-09-09T14:24:44.0818014Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0819152Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:44.0819603Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0820592Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0821501Z graph_break [] 2025-09-09T14:24:44.0821756Z PASSED 2025-09-09T14:24:44.0822776Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0823922Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:44.0824351Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0825685Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0826800Z graph_break [] 2025-09-09T14:24:44.0827046Z PASSED 2025-09-09T14:24:44.0828074Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0829199Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:44.0829727Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:44.0830728Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:44.0831618Z graph_break [] 2025-09-09T14:24:44.0831876Z PASSED 2025-09-09T14:24:44.0832910Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:44.0834053Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:44.0834495Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8908192Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8909483Z graph_break [] 2025-09-09T14:24:58.8910157Z PASSED 2025-09-09T14:24:58.8911567Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8913122Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:58.8913697Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8915104Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8916301Z graph_break [] 2025-09-09T14:24:58.8916635Z PASSED 2025-09-09T14:24:58.8918000Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8919540Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:58.8920113Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:24:58.8921440Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8922646Z graph_break [] 2025-09-09T14:24:58.8922959Z PASSED 2025-09-09T14:24:58.8924328Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8925855Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:58.8926425Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8927763Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8928961Z graph_break [] 2025-09-09T14:24:58.8929286Z PASSED 2025-09-09T14:24:58.8930988Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8932517Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:58.8933093Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8934710Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8935979Z graph_break [] 2025-09-09T14:24:58.8936243Z PASSED 2025-09-09T14:24:58.8937288Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8938419Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:58.8938857Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8939834Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8940729Z graph_break [] 2025-09-09T14:24:58.8940966Z PASSED 2025-09-09T14:24:58.8941994Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8943118Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:58.8943540Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8944748Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8945846Z graph_break [] 2025-09-09T14:24:58.8946092Z PASSED 2025-09-09T14:24:58.8947096Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8948231Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:58.8948674Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8949651Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8950550Z graph_break [] 2025-09-09T14:24:58.8950787Z PASSED 2025-09-09T14:24:58.8951810Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8952956Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:58.8953383Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8954386Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8955362Z graph_break [] 2025-09-09T14:24:58.8955605Z PASSED 2025-09-09T14:24:58.8956743Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8957874Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:58.8958310Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8959289Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8960255Z graph_break [] 2025-09-09T14:24:58.8960507Z PASSED 2025-09-09T14:24:58.8961521Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8962661Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:24:58.8963095Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8964102Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8965000Z graph_break [] 2025-09-09T14:24:58.8965240Z PASSED 2025-09-09T14:24:58.8966264Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8967446Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:58.8968020Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8969362Z inductor [('pattern_matcher_nodes', 5), 
('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8970553Z graph_break [] 2025-09-09T14:24:58.8970875Z PASSED 2025-09-09T14:24:58.8972228Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8973772Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:24:58.8974347Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:24:58.8975963Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:24:58.8977460Z graph_break [] 2025-09-09T14:24:58.8977773Z PASSED 2025-09-09T14:24:58.8979141Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:24:58.8980669Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:24:58.8981232Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7110974Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7112250Z graph_break [] 2025-09-09T14:25:22.7112790Z PASSED 2025-09-09T14:25:22.7114194Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7116125Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:22.7116716Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7118323Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7119860Z graph_break [] 2025-09-09T14:25:22.7120208Z PASSED 2025-09-09T14:25:22.7121750Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7123277Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:22.7123861Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7125189Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7126395Z graph_break [] 2025-09-09T14:25:22.7126718Z PASSED 2025-09-09T14:25:22.7128089Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7129629Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:22.7130196Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7131538Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7132734Z graph_break [] 2025-09-09T14:25:22.7133074Z PASSED 2025-09-09T14:25:22.7134448Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7135968Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:22.7136545Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7137865Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7138922Z graph_break [] 2025-09-09T14:25:22.7139181Z PASSED 2025-09-09T14:25:22.7140194Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7141326Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:22.7141751Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7142742Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7143630Z graph_break [] 2025-09-09T14:25:22.7143869Z PASSED 2025-09-09T14:25:22.7144889Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7146013Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:22.7146450Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7147525Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7148406Z graph_break [] 2025-09-09T14:25:22.7148658Z PASSED 2025-09-09T14:25:22.7149656Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7150853Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:22.7151291Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7152486Z inductor [('pattern_matcher_nodes', 6), 
('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7153596Z graph_break [] 2025-09-09T14:25:22.7153834Z PASSED 2025-09-09T14:25:22.7154922Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7156053Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:25:22.7156477Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7157474Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7158354Z graph_break [] 2025-09-09T14:25:22.7158612Z PASSED 2025-09-09T14:25:22.7159622Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7160735Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:22.7161175Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7162373Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7163475Z graph_break [] 2025-09-09T14:25:22.7163731Z PASSED 2025-09-09T14:25:22.7165088Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7166602Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:25:22.7167164Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7168492Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7169698Z graph_break [] 2025-09-09T14:25:22.7170015Z PASSED 2025-09-09T14:25:22.7171388Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7172907Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:22.7173488Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7174909Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7176109Z graph_break [] 2025-09-09T14:25:22.7176447Z PASSED 2025-09-09T14:25:22.7177800Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7179324Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:25:22.7179961Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:22.7181299Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:22.7182506Z graph_break [] 2025-09-09T14:25:22.7182830Z PASSED 2025-09-09T14:25:22.7184195Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:22.7185708Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:22.7186287Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8081007Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8082286Z graph_break [] 2025-09-09T14:25:35.8082818Z PASSED 2025-09-09T14:25:35.8084248Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_bfloat16_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8085791Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:25:35.8086380Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8087716Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8088925Z graph_break [] 2025-09-09T14:25:35.8089272Z PASSED 2025-09-09T14:25:35.8090643Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8092187Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:35.8092752Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8094388Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8095888Z graph_break [] 2025-09-09T14:25:35.8096206Z PASSED 2025-09-09T14:25:35.8097579Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8099095Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:35.8099675Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:25:35.8101011Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8102207Z graph_break [] 2025-09-09T14:25:35.8103758Z PASSED 2025-09-09T14:25:35.8105153Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8106676Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:35.8107261Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8108817Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8110552Z graph_break [] 2025-09-09T14:25:35.8110834Z PASSED 2025-09-09T14:25:35.8111859Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8112999Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:35.8113428Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8114423Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8115399Z graph_break [] 2025-09-09T14:25:35.8115663Z PASSED 2025-09-09T14:25:35.8116693Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8117834Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:35.8118283Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8119266Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8120165Z graph_break [] 2025-09-09T14:25:35.8120406Z PASSED 2025-09-09T14:25:35.8121421Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8122564Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:35.8122990Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8123991Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8124888Z graph_break [] 2025-09-09T14:25:35.8125125Z PASSED 2025-09-09T14:25:35.8126147Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8127267Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:35.8127706Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8128692Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8129585Z graph_break [] 2025-09-09T14:25:35.8129837Z PASSED 2025-09-09T14:25:35.8130977Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8132209Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:35.8132802Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8134151Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8135459Z graph_break [] 2025-09-09T14:25:35.8135782Z PASSED 2025-09-09T14:25:35.8137154Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8138655Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:35.8139232Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8140861Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8142354Z graph_break [] 2025-09-09T14:25:35.8142684Z PASSED 2025-09-09T14:25:35.8144031Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8145546Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:35.8146120Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:35.8147447Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8148650Z graph_break [] 2025-09-09T14:25:35.8148966Z PASSED 2025-09-09T14:25:35.8150320Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8151852Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:35.8152419Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 
2025-09-09T14:25:35.8154049Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:35.8155598Z graph_break [] 2025-09-09T14:25:35.8155943Z PASSED 2025-09-09T14:25:35.8157296Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:35.8158797Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:35.8159373Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8444783Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8446074Z graph_break [] 2025-09-09T14:25:55.8446588Z PASSED 2025-09-09T14:25:55.8448371Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8450002Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:55.8450571Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8451913Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8453122Z graph_break [] 2025-09-09T14:25:55.8453577Z PASSED 2025-09-09T14:25:55.8454945Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8456457Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:55.8457037Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8458374Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8459563Z graph_break [] 2025-09-09T14:25:55.8459892Z PASSED 2025-09-09T14:25:55.8461240Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8462763Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:55.8463337Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8464658Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8465866Z graph_break [] 2025-09-09T14:25:55.8466178Z PASSED 2025-09-09T14:25:55.8467534Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_False_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8469045Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:55.8469604Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8470899Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8471789Z graph_break [] 2025-09-09T14:25:55.8472041Z PASSED 2025-09-09T14:25:55.8473053Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8474193Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:55.8474726Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8475931Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8477053Z graph_break [] 2025-09-09T14:25:55.8477297Z PASSED 2025-09-09T14:25:55.8478305Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8479433Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:55.8479934Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8480930Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8481826Z graph_break [] 2025-09-09T14:25:55.8482065Z PASSED 2025-09-09T14:25:55.8483085Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8484266Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:55.8484705Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8485900Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8487003Z graph_break [] 2025-09-09T14:25:55.8487256Z PASSED 2025-09-09T14:25:55.8488260Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8489392Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:55.8489825Z aot_autograd 
[('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8490821Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8491712Z graph_break [] 2025-09-09T14:25:55.8491947Z PASSED 2025-09-09T14:25:55.8492974Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8494207Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:55.8494648Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8495642Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8496530Z graph_break [] 2025-09-09T14:25:55.8496785Z PASSED 2025-09-09T14:25:55.8497785Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8498919Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:55.8499361Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8500339Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8501229Z graph_break [] 2025-09-09T14:25:55.8501497Z PASSED 2025-09-09T14:25:55.8502866Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8504382Z stats [('calls_captured', 5), ('unique_graphs', 1)] 2025-09-09T14:25:55.8504939Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8506364Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8507553Z graph_break [] 2025-09-09T14:25:55.8507874Z PASSED 2025-09-09T14:25:55.8509223Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_False_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8510894Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:25:55.8511601Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:25:55.8512916Z inductor [('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_nodes', 4), ('qlinear_weight_prepack_matcher_count', 1), ('pattern_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:25:55.8514120Z graph_break [] 2025-09-09T14:25:55.8514450Z PASSED 2025-09-09T14:25:55.8515876Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:25:55.8517392Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:25:55.8517956Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8122260Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8123837Z graph_break [] 2025-09-09T14:27:55.8124368Z PASSED 2025-09-09T14:27:55.8125755Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_1_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8127272Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:27:55.8127850Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8129176Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8130380Z graph_break [] 2025-09-09T14:27:55.8130710Z PASSED 2025-09-09T14:27:55.8132098Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8133618Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:27:55.8134200Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8135816Z inductor [('pattern_matcher_nodes', 6), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8137313Z graph_break [] 2025-09-09T14:27:55.8137634Z PASSED 2025-09-09T14:27:55.8138990Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_1_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8140501Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:27:55.8141067Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8142734Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8143938Z graph_break [] 2025-09-09T14:27:55.8144282Z PASSED 2025-09-09T14:27:55.8145659Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8147172Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:27:55.8147755Z aot_autograd [('total', 
1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8149216Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8150214Z graph_break [] 2025-09-09T14:27:55.8150475Z PASSED 2025-09-09T14:27:55.8151482Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_32_inplace_add_False_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8152601Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:27:55.8153030Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8154025Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8155009Z graph_break [] 2025-09-09T14:27:55.8155261Z PASSED 2025-09-09T14:27:55.8156272Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8157396Z stats [('calls_captured', 6), ('unique_graphs', 1)] 2025-09-09T14:27:55.8157844Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8158817Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8159710Z graph_break [] 2025-09-09T14:27:55.8159961Z PASSED 2025-09-09T14:27:55.8160951Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_da8w8_sym_act_sym_wgt_with_int_mm_has_bias_True_float32_dynamic_True_reshape_a_True_M_32_inplace_add_True_expand_a_scale_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8162072Z stats [('calls_captured', 8), ('unique_graphs', 1)] 2025-09-09T14:27:55.8162498Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8163492Z inductor [('pattern_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8164385Z graph_break [] 2025-09-09T14:27:55.8164622Z PASSED 2025-09-09T14:27:55.8165321Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_dynamic_qlinear_cpu stats [('calls_captured', 22), ('unique_graphs', 8)] 2025-09-09T14:27:55.8166076Z inline_call [] 2025-09-09T14:27:55.8166307Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8166669Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8167894Z inductor [('pattern_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 4), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:27:55.8169005Z graph_break [] 2025-09-09T14:27:55.8169242Z PASSED 2025-09-09T14:27:55.8170886Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_dynamic_qlinear_input_dim_exceeds_2 stats [('calls_captured', 22), ('unique_graphs', 8)] 2025-09-09T14:27:55.8171731Z inline_call [] 2025-09-09T14:27:55.8171967Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8172346Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8173556Z inductor [('pattern_matcher_nodes', 18), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 8), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:27:55.8174755Z graph_break [] 2025-09-09T14:27:55.8175007Z PASSED 2025-09-09T14:27:55.8175735Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_dynamic_qlinear_qat_cpu stats [('calls_captured', 22), ('unique_graphs', 8)] 2025-09-09T14:27:55.8176531Z inline_call [] 2025-09-09T14:27:55.8176756Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8177140Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8178344Z inductor [('pattern_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 4), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:27:55.8179461Z graph_break [] 2025-09-09T14:27:55.8179697Z PASSED 2025-09-09T14:27:55.8180404Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_linear_dynamic_fp16 stats [('calls_captured', 20), ('unique_graphs', 16)] 2025-09-09T14:27:55.8181183Z inline_call [] 2025-09-09T14:27:55.8181400Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:27:55.8181773Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:27:55.8182766Z inductor [('pattern_matcher_nodes', 15), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 5), ('extern_calls', 4), ('qlinear_weight_prepack_matcher_count', 2), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:27:55.8183682Z graph_break [] 2025-09-09T14:27:55.8183923Z PASSED 2025-09-09T14:27:55.8184651Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_linear_relu_dynamic_fp16 stats [('calls_captured', 24), ('unique_graphs', 16)] 2025-09-09T14:27:55.8185454Z inline_call [] 2025-09-09T14:27:55.8185677Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:27:55.8186054Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:27:55.8187041Z inductor [('pattern_matcher_nodes', 17), ('qlinear_weight_prepack_matcher_nodes', 14), ('pattern_matcher_count', 5), ('extern_calls', 4), ('qlinear_weight_prepack_matcher_count', 2), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:27:55.8187945Z graph_break [] 2025-09-09T14:27:55.8188198Z PASSED 2025-09-09T14:27:55.8188866Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d stats [('calls_captured', 986), ('unique_graphs', 116)] 2025-09-09T14:27:55.8189625Z inline_call [] 2025-09-09T14:27:55.8189848Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:27:55.8190224Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:27:55.8191599Z inductor [('pattern_matcher_nodes', 7), ('qconv_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qconv_unary_matcher_nodes', 2), ('qconv_weight_prepack_matcher_count', 1), 
('qconv_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:27:55.8192898Z graph_break [] 2025-09-09T14:27:55.8193152Z PASSED 2025-09-09T14:27:55.8193833Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d_add stats [('calls_captured', 995), ('unique_graphs', 116)] 2025-09-09T14:27:55.8194688Z inline_call [] 2025-09-09T14:27:55.8194916Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8501051Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8513063Z inductor [('pattern_matcher_nodes', 17), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 7), ('qconv2d_binary_matcher_nodes', 4), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_nodes', 2), ('extern_calls', 2), ('dequant_promotion_matcher_count', 1), ('dequant_promotion_matcher_nodes', 1), ('qconv2d_binary_matcher_count', 1), ('qconv_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('qconv2d_binary_lower_count', 1), ('qconv2d_binary_lower_nodes', 1)] 2025-09-09T14:31:43.8516310Z graph_break [] 2025-09-09T14:31:43.8516860Z PASSED 2025-09-09T14:31:43.8517845Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d_add_relu stats [('calls_captured', 997), ('unique_graphs', 116)] 2025-09-09T14:31:43.8518906Z inline_call [] 2025-09-09T14:31:43.8519200Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8519708Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8522613Z inductor [('pattern_matcher_nodes', 18), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 7), ('qconv2d_binary_matcher_nodes', 5), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_nodes', 2), ('extern_calls', 2), ('dequant_promotion_matcher_count', 1), ('dequant_promotion_matcher_nodes', 1), ('qconv2d_binary_matcher_count', 1), ('qconv_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('qconv2d_binary_lower_count', 1), ('qconv2d_binary_lower_nodes', 1)] 2025-09-09T14:31:43.8525357Z graph_break [] 2025-09-09T14:31:43.8525697Z PASSED 2025-09-09T14:31:43.8526678Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d_hardswish stats [('calls_captured', 996), ('unique_graphs', 116)] 2025-09-09T14:31:43.8527736Z inline_call [] 2025-09-09T14:31:43.8528039Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8528520Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8530394Z inductor [('pattern_matcher_nodes', 24), ('qconv_unary_matcher_nodes', 14), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:31:43.8531754Z graph_break [] 2025-09-09T14:31:43.8532004Z PASSED 2025-09-09T14:31:43.8532731Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d_hardtanh stats [('calls_captured', 996), ('unique_graphs', 116)] 2025-09-09T14:31:43.8533503Z inline_call [] 2025-09-09T14:31:43.8533732Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8534090Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), 
('ok', 1)] 2025-09-09T14:31:43.8535499Z inductor [('pattern_matcher_nodes', 18), ('qconv_weight_prepack_matcher_nodes', 8), ('qconv_unary_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:31:43.8536780Z graph_break [] 2025-09-09T14:31:43.8537021Z PASSED 2025-09-09T14:31:43.8537719Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d_relu stats [('calls_captured', 996), ('unique_graphs', 116)] 2025-09-09T14:31:43.8538481Z inline_call [] 2025-09-09T14:31:43.8538699Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8539078Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8540578Z inductor [('pattern_matcher_nodes', 16), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_unary_matcher_nodes', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:31:43.8541886Z graph_break [] 2025-09-09T14:31:43.8542132Z PASSED 2025-09-09T14:31:43.8542840Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d_relu6 stats [('calls_captured', 996), ('unique_graphs', 116)] 2025-09-09T14:31:43.8543612Z inline_call [] 2025-09-09T14:31:43.8543831Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8544205Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8545653Z inductor [('pattern_matcher_nodes', 18), ('qconv_weight_prepack_matcher_nodes', 8), ('qconv_unary_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:31:43.8546949Z graph_break [] 2025-09-09T14:31:43.8547205Z PASSED 2025-09-09T14:31:43.8547885Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qat_qconv2d_silu stats [('calls_captured', 996), ('unique_graphs', 116)] 2025-09-09T14:31:43.8548647Z inline_call [] 2025-09-09T14:31:43.8548864Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8549237Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8550615Z inductor [('pattern_matcher_nodes', 18), ('qconv_weight_prepack_matcher_nodes', 8), ('qconv_unary_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:31:43.8551914Z graph_break [] 2025-09-09T14:31:43.8552167Z PASSED 2025-09-09T14:31:43.8552795Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qcat stats [('calls_captured', 26), ('unique_graphs', 8)] 2025-09-09T14:31:43.8553508Z inline_call [] 2025-09-09T14:31:43.8553726Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8554102Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8555746Z inductor [('pattern_matcher_nodes', 18), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 7), ('qconv_unary_matcher_nodes', 4), ('qcat_matcher_nodes', 4), ('extern_calls', 4), 
('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('fxgraph_cache_bypass', 1), ('qcat_matcher_count', 1)] 2025-09-09T14:31:43.8557187Z graph_break [] 2025-09-09T14:31:43.8557450Z PASSED 2025-09-09T14:31:43.8558123Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv1d_relu_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:31:43.8558876Z inline_call [] 2025-09-09T14:31:43.8559109Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8559477Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8560877Z inductor [('pattern_matcher_nodes', 13), ('qconv_weight_prepack_matcher_nodes', 6), ('pattern_matcher_count', 6), ('qconv_unary_matcher_nodes', 5), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:31:43.8562162Z graph_break [] 2025-09-09T14:31:43.8562414Z PASSED 2025-09-09T14:31:43.8563092Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_2 stats [('calls_captured', 13), ('unique_graphs', 8)] 2025-09-09T14:31:43.8563819Z inline_call [] 2025-09-09T14:31:43.8564051Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8564413Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8565671Z inductor [('pattern_matcher_nodes', 5), ('qconv_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qconv_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:31:43.8566744Z graph_break [] 2025-09-09T14:31:43.8566985Z PASSED 2025-09-09T14:31:43.8567664Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_3 stats [('calls_captured', 29), ('unique_graphs', 8)] 2025-09-09T14:31:43.8568387Z inline_call [] 2025-09-09T14:31:43.8568623Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8569046Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8571340Z inductor [('pattern_matcher_nodes', 18), ('pattern_matcher_count', 8), ('qconv_weight_prepack_matcher_nodes', 7), ('qcat_matcher_nodes', 4), ('extern_calls', 4), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_nodes', 2), ('qconv2d_binary_matcher_nodes', 2), ('dequant_promotion_matcher_count', 1), ('dequant_promotion_matcher_nodes', 1), ('qconv_unary_matcher_count', 1), ('qconv2d_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv2d_binary_lower_count', 1), ('qconv2d_binary_lower_nodes', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('qcat_matcher_count', 1)] 2025-09-09T14:31:43.8573552Z graph_break [] 2025-09-09T14:31:43.8573795Z PASSED 2025-09-09T14:31:43.8574555Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_broadcast_shapes_cpu stats [('calls_captured', 15), ('unique_graphs', 8)] 2025-09-09T14:31:43.8575386Z inline_call [] 2025-09-09T14:31:43.8575605Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:31:43.8575984Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:31:43.8577147Z inductor [('pattern_matcher_nodes', 5), ('qconv_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 2), ('qconv_weight_prepack_matcher_count', 1), 
('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:31:43.8578224Z graph_break [] 2025-09-09T14:31:43.8578468Z PASSED 2025-09-09T14:31:43.8579008Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_cpu inline_call [] 2025-09-09T14:33:37.6754523Z stats [('calls_captured', 24), ('unique_graphs', 8)] 2025-09-09T14:33:37.6754971Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6755343Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6757150Z inductor [('pattern_matcher_nodes', 16), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv2d_binary_matcher_nodes', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv2d_binary_matcher_count', 2), ('qconv2d_binary_lower_count', 2), ('qconv2d_binary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6759269Z graph_break [] 2025-09-09T14:33:37.6759670Z PASSED 2025-09-09T14:33:37.6760300Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_int8_mixed_bf16 inline_call [] 2025-09-09T14:33:37.6760999Z stats [('calls_captured', 24), ('unique_graphs', 8)] 2025-09-09T14:33:37.6761357Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6761734Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6763173Z inductor [('pattern_matcher_nodes', 20), ('qconv_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 6), ('qconv2d_binary_matcher_nodes', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv2d_binary_matcher_count', 2), ('qconv2d_binary_lower_count', 2), ('qconv2d_binary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6764518Z graph_break [] 2025-09-09T14:33:37.6764760Z PASSED 2025-09-09T14:33:37.6765322Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_relu_cpu inline_call [] 2025-09-09T14:33:37.6766008Z stats [('calls_captured', 28), ('unique_graphs', 8)] 2025-09-09T14:33:37.6766693Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6767080Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6768531Z inductor [('pattern_matcher_nodes', 18), ('qconv_weight_prepack_matcher_nodes', 8), ('qconv2d_binary_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv2d_binary_matcher_count', 2), ('qconv2d_binary_lower_count', 2), ('qconv2d_binary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6769990Z graph_break [] 2025-09-09T14:33:37.6770250Z PASSED 2025-09-09T14:33:37.6770852Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_relu_int8_mixed_bf16 inline_call [] 2025-09-09T14:33:37.6771595Z stats [('calls_captured', 28), ('unique_graphs', 8)] 2025-09-09T14:33:37.6771941Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6772315Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6773772Z inductor [('pattern_matcher_nodes', 22), ('qconv_weight_prepack_matcher_nodes', 12), ('qconv2d_binary_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv2d_binary_matcher_count', 2), ('qconv2d_binary_lower_count', 2), ('qconv2d_binary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6775114Z graph_break [] 
2025-09-09T14:33:37.6775361Z PASSED 2025-09-09T14:33:37.6776013Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_cpu stats [('calls_captured', 21), ('unique_graphs', 8)] 2025-09-09T14:33:37.6776747Z inline_call [] 2025-09-09T14:33:37.6776963Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6777336Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6778724Z inductor [('pattern_matcher_nodes', 19), ('qconv_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 8), ('qconv_unary_matcher_nodes', 4), ('qconv_weight_prepack_matcher_count', 3), ('qconv_unary_lower_count', 3), ('qconv_unary_lower_nodes', 3), ('extern_calls', 3), ('qconv_unary_matcher_count', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6779992Z graph_break [] 2025-09-09T14:33:37.6780239Z PASSED 2025-09-09T14:33:37.6780973Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_dequant_promotion_cpu stats [('calls_captured', 24), ('unique_graphs', 8)] 2025-09-09T14:33:37.6781781Z inline_call [] 2025-09-09T14:33:37.6782010Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6782403Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6784541Z inductor [('pattern_matcher_nodes', 22), ('qconv_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 10), ('qconv_unary_matcher_nodes', 4), ('qconv_weight_prepack_matcher_count', 3), ('extern_calls', 3), ('qconv_unary_matcher_count', 2), ('qconv2d_binary_matcher_nodes', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('dequant_promotion_matcher_count', 1), ('dequant_promotion_matcher_nodes', 1), ('qconv2d_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv2d_binary_lower_count', 1), ('qconv2d_binary_lower_nodes', 1)] 2025-09-09T14:33:37.6786604Z graph_break [] 2025-09-09T14:33:37.6786854Z PASSED 2025-09-09T14:33:37.6787551Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_hardswish_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:33:37.6788325Z inline_call [] 2025-09-09T14:33:37.6788550Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6788924Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6790369Z inductor [('pattern_matcher_nodes', 23), ('qconv_unary_matcher_nodes', 13), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6791661Z graph_break [] 2025-09-09T14:33:37.6791910Z PASSED 2025-09-09T14:33:37.6792670Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_hardswish_int8_mixed_bf16_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:33:37.6793501Z inline_call [] 2025-09-09T14:33:37.6793719Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6794093Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6795656Z inductor [('pattern_matcher_nodes', 33), ('qconv_unary_matcher_nodes', 17), ('qconv_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 8), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 
2025-09-09T14:33:37.6796952Z graph_break [] 2025-09-09T14:33:37.6797212Z PASSED 2025-09-09T14:33:37.6797914Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_hardtanh_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:33:37.6798685Z inline_call [] 2025-09-09T14:33:37.6798917Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6799281Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6800671Z inductor [('pattern_matcher_nodes', 17), ('qconv_weight_prepack_matcher_nodes', 8), ('qconv_unary_matcher_nodes', 7), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6801941Z graph_break [] 2025-09-09T14:33:37.6802187Z PASSED 2025-09-09T14:33:37.6802952Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_hardtanh_int8_mixed_bf16_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:33:37.6803780Z inline_call [] 2025-09-09T14:33:37.6804011Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6804372Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6805763Z inductor [('pattern_matcher_nodes', 27), ('qconv_weight_prepack_matcher_nodes', 12), ('qconv_unary_matcher_nodes', 11), ('pattern_matcher_count', 8), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6807060Z graph_break [] 2025-09-09T14:33:37.6807296Z PASSED 2025-09-09T14:33:37.6808010Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_int8_mixed_bf16 stats [('calls_captured', 21), ('unique_graphs', 8)] 2025-09-09T14:33:37.6808781Z inline_call [] 2025-09-09T14:33:37.6809005Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6809396Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6810990Z inductor [('pattern_matcher_nodes', 25), ('qconv_weight_prepack_matcher_nodes', 18), ('pattern_matcher_count', 8), ('qconv_unary_matcher_nodes', 4), ('qconv_weight_prepack_matcher_count', 3), ('qconv_unary_lower_count', 3), ('qconv_unary_lower_nodes', 3), ('extern_calls', 3), ('qconv_unary_matcher_count', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6812277Z graph_break [] 2025-09-09T14:33:37.6812543Z PASSED 2025-09-09T14:33:37.6813228Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_relu6_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:33:37.6813994Z inline_call [] 2025-09-09T14:33:37.6814217Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6814595Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:33:37.6816136Z inductor [('pattern_matcher_nodes', 17), ('qconv_weight_prepack_matcher_nodes', 8), ('qconv_unary_matcher_nodes', 7), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:33:37.6817424Z graph_break [] 2025-09-09T14:33:37.6817679Z PASSED 2025-09-09T14:33:37.6818356Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_relu_cpu stats 
[('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:33:37.6819115Z inline_call [] 2025-09-09T14:33:37.6819352Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:33:37.6819796Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:37:28.3138107Z inductor [('pattern_matcher_nodes', 15), ('qconv_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qconv_unary_matcher_nodes', 5), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:37:28.3139520Z graph_break [] 2025-09-09T14:37:28.3139955Z PASSED 2025-09-09T14:37:28.3140732Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_relu_int8_mixed_bf16_xpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:37:28.3141555Z inline_call [] 2025-09-09T14:37:28.3141793Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:37:28.3142166Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:37:28.3143572Z inductor [('pattern_matcher_nodes', 19), ('qconv_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 6), ('qconv_unary_matcher_nodes', 5), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:37:28.3144880Z graph_break [] 2025-09-09T14:37:28.3145146Z PASSED 2025-09-09T14:37:28.3145845Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_silu_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:37:28.3146601Z inline_call [] 2025-09-09T14:37:28.3146823Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:37:28.3147199Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:37:28.3148578Z inductor [('pattern_matcher_nodes', 17), ('qconv_weight_prepack_matcher_nodes', 8), ('qconv_unary_matcher_nodes', 7), ('pattern_matcher_count', 6), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:37:28.3149869Z graph_break [] 2025-09-09T14:37:28.3150106Z PASSED 2025-09-09T14:37:28.3150852Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_silu_int8_mixed_bf16_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:37:28.3151663Z inline_call [] 2025-09-09T14:37:28.3151887Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:37:28.3152264Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:37:28.3153641Z inductor [('pattern_matcher_nodes', 27), ('qconv_weight_prepack_matcher_nodes', 12), ('qconv_unary_matcher_nodes', 11), ('pattern_matcher_count', 8), ('qconv_weight_prepack_matcher_count', 2), ('qconv_unary_matcher_count', 2), ('qconv_unary_lower_count', 2), ('qconv_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:37:28.3155014Z graph_break [] 2025-09-09T14:37:28.3155280Z PASSED 2025-09-09T14:37:28.3155990Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_with_concat_cpu stats [('calls_captured', 32), ('unique_graphs', 8)] 2025-09-09T14:37:28.3156782Z inline_call [] 2025-09-09T14:37:28.3157003Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:37:28.3157387Z aot_autograd 
[('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:37:28.3159619Z inductor [('pattern_matcher_nodes', 30), ('pattern_matcher_count', 14), ('qconv_weight_prepack_matcher_nodes', 13), ('qconv_unary_matcher_nodes', 6), ('extern_calls', 6), ('qcat_matcher_nodes', 5), ('qconv_weight_prepack_matcher_count', 4), ('qconv_unary_lower_count', 4), ('qconv_unary_lower_nodes', 4), ('qconv_unary_matcher_count', 3), ('dequant_promotion_matcher_count', 2), ('dequant_promotion_matcher_nodes', 2), ('fxgraph_cache_bypass', 1), ('qcat_matcher_count', 1)] 2025-09-09T14:37:28.3161359Z graph_break [] 2025-09-09T14:37:28.3161618Z PASSED 2025-09-09T14:37:28.3162400Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qflatten stats [('calls_captured', 27), ('unique_graphs', 8)] 2025-09-09T14:37:28.3163129Z inline_call [] 2025-09-09T14:37:28.3163371Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:37:28.3163733Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:37:28.3165329Z inductor [('pattern_matcher_nodes', 12), ('pattern_matcher_count', 5), ('qconv_weight_prepack_matcher_nodes', 4), ('qconv_unary_matcher_nodes', 3), ('qreshape_matcher_nodes', 3), ('qconv_weight_prepack_matcher_count', 1), ('qconv_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('qreshape_matcher_count', 1), ('extern_calls', 1)] 2025-09-09T14:37:28.3166807Z graph_break [] 2025-09-09T14:37:28.3167059Z PASSED 2025-09-09T14:37:28.3167778Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_False_is_qat_False_is_dynamic_False inline_call [] 2025-09-09T14:37:28.3168608Z stats [('calls_captured', 56), ('unique_graphs', 16)] 2025-09-09T14:37:28.3168967Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:37:28.3169328Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:37:28.3171403Z inductor [('pattern_matcher_nodes', 102), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 10), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:37:28.3173413Z graph_break [] 2025-09-09T14:37:28.3173665Z PASSED 2025-09-09T14:37:28.3174365Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_False_is_qat_False_is_dynamic_True inline_call [] 2025-09-09T14:37:28.3175211Z stats [('calls_captured', 60), ('unique_graphs', 16)] 2025-09-09T14:37:28.3175558Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:37:28.3175932Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:37:28.3178031Z inductor [('pattern_matcher_nodes', 101), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 9), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), 
('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:37:28.3180023Z graph_break [] 2025-09-09T14:37:28.3180271Z PASSED 2025-09-09T14:37:28.3180993Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_False_is_qat_True_is_dynamic_False inline_call [] 2025-09-09T14:37:28.3181829Z stats [('calls_captured', 56), ('unique_graphs', 16)] 2025-09-09T14:37:28.3182191Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:37:28.3182573Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:37:28.3184916Z inductor [('pattern_matcher_nodes', 102), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 10), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:37:28.3186937Z graph_break [] 2025-09-09T14:37:28.3187254Z PASSED 2025-09-09T14:37:28.3187971Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_False_is_qat_True_is_dynamic_True inline_call [] 2025-09-09T14:37:28.3188797Z stats [('calls_captured', 60), ('unique_graphs', 16)] 2025-09-09T14:37:28.3189161Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:37:28.3189526Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:37:28.3191638Z inductor [('pattern_matcher_nodes', 101), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 9), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:37:28.3193633Z graph_break [] 2025-09-09T14:37:28.3193871Z PASSED 2025-09-09T14:37:28.3194650Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_True_is_qat_False_is_dynamic_False inline_call [] 2025-09-09T14:37:28.3195498Z stats [('calls_captured', 64), ('unique_graphs', 16)] 2025-09-09T14:37:28.3195849Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:37:28.3196232Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:37:28.3198307Z inductor [('pattern_matcher_nodes', 106), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 14), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:37:28.3200313Z graph_break [] 2025-09-09T14:37:28.3200570Z PASSED 2025-09-09T14:45:58.1583505Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_True_is_qat_False_is_dynamic_True inline_call [] 
2025-09-09T14:45:58.1584372Z stats [('calls_captured', 68), ('unique_graphs', 16)] 2025-09-09T14:45:58.1584771Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1585183Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1587290Z inductor [('pattern_matcher_nodes', 105), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 13), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1589320Z graph_break [] 2025-09-09T14:45:58.1589721Z PASSED 2025-09-09T14:45:58.1590422Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_True_is_qat_True_is_dynamic_False inline_call [] 2025-09-09T14:45:58.1591257Z stats [('calls_captured', 64), ('unique_graphs', 16)] 2025-09-09T14:45:58.1592003Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1592397Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1594511Z inductor [('pattern_matcher_nodes', 106), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 14), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1596769Z graph_break [] 2025-09-09T14:45:58.1597032Z PASSED 2025-09-09T14:45:58.1597737Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_cpu_use_relu_True_is_qat_True_is_dynamic_True inline_call [] 2025-09-09T14:45:58.1598581Z stats [('calls_captured', 68), ('unique_graphs', 16)] 2025-09-09T14:45:58.1598932Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1599310Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1601387Z inductor [('pattern_matcher_nodes', 105), ('pattern_matcher_count', 48), ('qlinear_weight_prepack_matcher_nodes', 48), ('qlinear_binary_matcher_nodes', 13), ('dequant_promotion_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_count', 8), ('extern_calls', 8), ('removed_pointless_view_pair', 4), ('dequant_promotion_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1603402Z graph_break [] 2025-09-09T14:45:58.1603659Z PASSED 2025-09-09T14:45:58.1604414Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_False_is_dynamic_False inline_call [] 2025-09-09T14:45:58.1605313Z stats [('calls_captured', 72), ('unique_graphs', 16)] 2025-09-09T14:45:58.1605665Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1606045Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1608290Z inductor [('pattern_matcher_nodes', 108), 
('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('dequant_promotion_matcher_nodes', 10), ('qlinear_binary_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1610597Z graph_break [] 2025-09-09T14:45:58.1610848Z PASSED 2025-09-09T14:45:58.1611618Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_False_is_dynamic_True inline_call [] 2025-09-09T14:45:58.1612492Z stats [('calls_captured', 76), ('unique_graphs', 16)] 2025-09-09T14:45:58.1612850Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1613209Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1615432Z inductor [('pattern_matcher_nodes', 107), ('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('dequant_promotion_matcher_nodes', 10), ('qlinear_binary_matcher_nodes', 9), ('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1617654Z graph_break [] 2025-09-09T14:45:58.1617894Z PASSED 2025-09-09T14:45:58.1618650Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_True_is_dynamic_False inline_call [] 2025-09-09T14:45:58.1619535Z stats [('calls_captured', 72), ('unique_graphs', 16)] 2025-09-09T14:45:58.1619882Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1620255Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1622451Z inductor [('pattern_matcher_nodes', 108), ('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('dequant_promotion_matcher_nodes', 10), ('qlinear_binary_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1624667Z graph_break [] 2025-09-09T14:45:58.1624917Z PASSED 2025-09-09T14:45:58.1625655Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_True_is_dynamic_True inline_call [] 2025-09-09T14:45:58.1626539Z stats [('calls_captured', 76), ('unique_graphs', 16)] 2025-09-09T14:45:58.1626920Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1627294Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1629512Z inductor [('pattern_matcher_nodes', 107), ('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('dequant_promotion_matcher_nodes', 10), ('qlinear_binary_matcher_nodes', 9), 
('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1631629Z graph_break [] 2025-09-09T14:45:58.1631868Z PASSED 2025-09-09T14:45:58.1632621Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_True_is_qat_False_is_dynamic_False inline_call [] 2025-09-09T14:45:58.1633486Z stats [('calls_captured', 80), ('unique_graphs', 16)] 2025-09-09T14:45:58.1633856Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1634216Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1636505Z inductor [('pattern_matcher_nodes', 112), ('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('qlinear_binary_matcher_nodes', 14), ('dequant_promotion_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:45:58.1638632Z graph_break [] 2025-09-09T14:45:58.1638873Z PASSED 2025-09-09T14:45:58.1639627Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_True_is_qat_False_is_dynamic_True inline_call [] 2025-09-09T14:45:58.1640515Z stats [('calls_captured', 84), ('unique_graphs', 16)] 2025-09-09T14:45:58.1640863Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:45:58.1641239Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:45:58.1643515Z inductor [('pattern_matcher_nodes', 111), ('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('qlinear_binary_matcher_nodes', 13), ('dequant_promotion_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:49:23.9774226Z graph_break [] 2025-09-09T14:49:23.9774757Z PASSED 2025-09-09T14:49:23.9775641Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_True_is_qat_True_is_dynamic_False inline_call [] 2025-09-09T14:49:23.9776994Z stats [('calls_captured', 80), ('unique_graphs', 16)] 2025-09-09T14:49:23.9777393Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:49:23.9777820Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:49:23.9780251Z inductor [('pattern_matcher_nodes', 112), ('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('qlinear_binary_matcher_nodes', 14), ('dequant_promotion_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), 
('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:49:23.9782576Z graph_break [] 2025-09-09T14:49:23.9782903Z PASSED 2025-09-09T14:49:23.9783694Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_add_int8_mixed_bf16_use_relu_True_is_qat_True_is_dynamic_True inline_call [] 2025-09-09T14:49:23.9784631Z stats [('calls_captured', 84), ('unique_graphs', 16)] 2025-09-09T14:49:23.9784982Z frames [('total', 2), ('ok', 2)] 2025-09-09T14:49:23.9785417Z aot_autograd [('total', 2), ('autograd_cache_bypass', 2), ('ok', 2)] 2025-09-09T14:49:23.9787823Z inductor [('pattern_matcher_nodes', 111), ('qlinear_weight_prepack_matcher_nodes', 56), ('pattern_matcher_count', 44), ('qlinear_binary_matcher_nodes', 13), ('dequant_promotion_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_count', 8), ('qlinear_unary_matcher_nodes', 8), ('extern_calls', 8), ('dequant_promotion_matcher_count', 4), ('qlinear_unary_matcher_count', 4), ('qlinear_binary_matcher_count', 4), ('qlinear_unary_lower_count', 4), ('qlinear_unary_lower_nodes', 4), ('qlinear_binary_lower_count', 4), ('qlinear_binary_lower_nodes', 4), ('fxgraph_cache_bypass', 2)] 2025-09-09T14:49:23.9790115Z graph_break [] 2025-09-09T14:49:23.9790371Z PASSED 2025-09-09T14:49:23.9791084Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_cpu stats [('calls_captured', 16), ('unique_graphs', 8)] 2025-09-09T14:49:23.9791880Z inline_call [] 2025-09-09T14:49:23.9792166Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9792535Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:49:23.9794101Z inductor [('pattern_matcher_nodes', 12), ('qlinear_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 5), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('qlinear_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:49:23.9795665Z graph_break [] 2025-09-09T14:49:23.9795962Z PASSED 2025-09-09T14:49:23.9796782Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_dequant_promotion_cpu stats [('calls_captured', 22), ('unique_graphs', 8)] 2025-09-09T14:49:23.9797647Z inline_call [] 2025-09-09T14:49:23.9797881Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9798301Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:49:23.9800909Z inductor [('pattern_matcher_nodes', 20), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 9), ('qlinear_weight_prepack_matcher_count', 3), ('extern_calls', 3), ('qlinear_unary_matcher_nodes', 2), ('qlinear_binary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('dequant_promotion_matcher_count', 1), ('dequant_promotion_matcher_nodes', 1), ('qlinear_unary_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1)] 2025-09-09T14:49:23.9803273Z graph_break [] 2025-09-09T14:49:23.9803574Z PASSED 2025-09-09T14:49:23.9804464Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_dequant_promotion_cpu_input_dim_exceeds_2 stats [('calls_captured', 22), ('unique_graphs', 8)] 2025-09-09T14:49:23.9805414Z 
inline_call [] 2025-09-09T14:49:23.9805632Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9806062Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:49:23.9808440Z inductor [('pattern_matcher_nodes', 33), ('qlinear_weight_prepack_matcher_nodes', 18), ('pattern_matcher_count', 15), ('qlinear_weight_prepack_matcher_count', 3), ('extern_calls', 3), ('dequant_promotion_matcher_nodes', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_binary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('dequant_promotion_matcher_count', 1), ('qlinear_unary_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1)] 2025-09-09T14:49:23.9810921Z graph_break [] 2025-09-09T14:49:23.9811186Z PASSED 2025-09-09T14:49:23.9812020Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_dequant_promotion_dynamic_cpu stats [('calls_captured', 27), ('unique_graphs', 8)] 2025-09-09T14:49:23.9812878Z inline_call [] 2025-09-09T14:49:23.9813097Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9813480Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:49:23.9815519Z inductor [('pattern_matcher_nodes', 18), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 8), ('qlinear_weight_prepack_matcher_count', 3), ('extern_calls', 3), ('qlinear_binary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('dequant_promotion_matcher_count', 1), ('dequant_promotion_matcher_nodes', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1)] 2025-09-09T14:49:23.9817377Z graph_break [] 2025-09-09T14:49:23.9817627Z PASSED 2025-09-09T14:49:23.9818482Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_dequant_promotion_int8_mixed_bf16 stats [('calls_captured', 22), ('unique_graphs', 8)] 2025-09-09T14:49:23.9819325Z inline_call [] 2025-09-09T14:49:23.9819555Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9819924Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:49:23.9822141Z inductor [('pattern_matcher_nodes', 27), ('qlinear_weight_prepack_matcher_nodes', 18), ('pattern_matcher_count', 9), ('qlinear_weight_prepack_matcher_count', 3), ('extern_calls', 3), ('dequant_promotion_matcher_nodes', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_binary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('dequant_promotion_matcher_count', 1), ('qlinear_unary_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1)] 2025-09-09T14:49:23.9824244Z graph_break [] 2025-09-09T14:49:23.9824481Z PASSED 2025-09-09T14:49:23.9825421Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_dequant_promotion_int8_mixed_bf16_input_dim_exceeds_2 stats [('calls_captured', 22), ('unique_graphs', 8)] 2025-09-09T14:49:23.9826360Z inline_call [] 2025-09-09T14:49:23.9827470Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9827883Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:49:23.9830088Z inductor [('pattern_matcher_nodes', 40), ('qlinear_weight_prepack_matcher_nodes', 24), ('pattern_matcher_count', 15), 
('dequant_promotion_matcher_nodes', 3), ('qlinear_weight_prepack_matcher_count', 3), ('extern_calls', 3), ('qlinear_unary_matcher_nodes', 2), ('qlinear_binary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('dequant_promotion_matcher_count', 1), ('qlinear_unary_matcher_count', 1), ('qlinear_binary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_binary_lower_count', 1), ('qlinear_binary_lower_nodes', 1)] 2025-09-09T14:49:23.9832299Z graph_break [] 2025-09-09T14:49:23.9832570Z PASSED 2025-09-09T14:49:23.9833252Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_gelu_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:49:23.9834010Z inline_call [] 2025-09-09T14:49:23.9834230Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9834684Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:49:23.9836179Z inductor [('pattern_matcher_nodes', 31), ('qlinear_unary_matcher_nodes', 21), ('qlinear_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:49:23.9837532Z graph_break [] 2025-09-09T14:49:23.9837793Z PASSED 2025-09-09T14:49:23.9838597Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_gelu_int8_mixed_bf16 stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:49:23.9839403Z inline_call [] 2025-09-09T14:49:23.9839637Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:49:23.9840003Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2192029Z inductor [('pattern_matcher_nodes', 41), ('qlinear_unary_matcher_nodes', 25), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 8), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2193887Z graph_break [] 2025-09-09T14:50:46.2194630Z PASSED 2025-09-09T14:50:46.2195578Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_input_dim_exceeds_2 stats [('calls_captured', 16), ('unique_graphs', 8)] 2025-09-09T14:50:46.2196415Z inline_call [] 2025-09-09T14:50:46.2196640Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2197021Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2198494Z inductor [('pattern_matcher_nodes', 20), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 9), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('qlinear_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2199841Z graph_break [] 2025-09-09T14:50:46.2200111Z PASSED 2025-09-09T14:50:46.2200912Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_input_dim_exceeds_2_and_not_contiguous stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:50:46.2201791Z inline_call [] 2025-09-09T14:50:46.2202023Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2202386Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2205036Z inductor 
[('pattern_matcher_nodes', 20), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 9), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('qlinear_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2206543Z graph_break [] 2025-09-09T14:50:46.2206867Z PASSED 2025-09-09T14:50:46.2207647Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_int8_mixed_bf16 stats [('calls_captured', 16), ('unique_graphs', 8)] 2025-09-09T14:50:46.2208510Z inline_call [] 2025-09-09T14:50:46.2208750Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2209184Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2211115Z inductor [('pattern_matcher_nodes', 16), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 5), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('qlinear_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2212635Z graph_break [] 2025-09-09T14:50:46.2212930Z PASSED 2025-09-09T14:50:46.2213877Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_int8_mixed_bf16_input_dim_exceeds_2 stats [('calls_captured', 16), ('unique_graphs', 8)] 2025-09-09T14:50:46.2214838Z inline_call [] 2025-09-09T14:50:46.2215074Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2215482Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2217084Z inductor [('pattern_matcher_nodes', 24), ('qlinear_weight_prepack_matcher_nodes', 16), ('pattern_matcher_count', 9), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('qlinear_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2218554Z graph_break [] 2025-09-09T14:50:46.2218827Z PASSED 2025-09-09T14:50:46.2219878Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_int8_mixed_bf16_input_dim_exceeds_2_and_not_contiguous stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:50:46.2220918Z inline_call [] 2025-09-09T14:50:46.2221214Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2221581Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2223155Z inductor [('pattern_matcher_nodes', 24), ('qlinear_weight_prepack_matcher_nodes', 16), ('pattern_matcher_count', 9), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_nodes', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('qlinear_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2224642Z graph_break [] 2025-09-09T14:50:46.2224895Z PASSED 2025-09-09T14:50:46.2225642Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_mul_cpu stats [('calls_captured', 17), ('unique_graphs', 8)] 2025-09-09T14:50:46.2226420Z inline_call [] 2025-09-09T14:50:46.2226681Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2227055Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2228481Z inductor [('pattern_matcher_nodes', 7), ('qlinear_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), 
('qlinear_unary_matcher_nodes', 2), ('qlinear_weight_prepack_matcher_count', 1), ('qlinear_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:50:46.2229899Z graph_break [] 2025-09-09T14:50:46.2230142Z PASSED 2025-09-09T14:50:46.2230825Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_relu_cpu stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:50:46.2231579Z inline_call [] 2025-09-09T14:50:46.2231796Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2232169Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2233846Z inductor [('pattern_matcher_nodes', 15), ('qlinear_weight_prepack_matcher_nodes', 8), ('pattern_matcher_count', 6), ('qlinear_unary_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2235277Z graph_break [] 2025-09-09T14:50:46.2235542Z PASSED 2025-09-09T14:50:46.2236289Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_relu_input_dim_exceeds_2 stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:50:46.2237221Z inline_call [] 2025-09-09T14:50:46.2237444Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2237822Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2239315Z inductor [('pattern_matcher_nodes', 23), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 10), ('qlinear_unary_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2240720Z graph_break [] 2025-09-09T14:50:46.2240979Z PASSED 2025-09-09T14:50:46.2241702Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_relu_int8_mixed_bf16 stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:50:46.2242503Z inline_call [] 2025-09-09T14:50:46.2242721Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2243103Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2245116Z inductor [('pattern_matcher_nodes', 19), ('qlinear_weight_prepack_matcher_nodes', 12), ('pattern_matcher_count', 6), ('qlinear_unary_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_count', 2), ('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2256394Z graph_break [] 2025-09-09T14:50:46.2256880Z PASSED 2025-09-09T14:50:46.2257743Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qlinear_relu_int8_mixed_bf16_input_dim_exceeds_2 stats [('calls_captured', 20), ('unique_graphs', 8)] 2025-09-09T14:50:46.2258623Z inline_call [] 2025-09-09T14:50:46.2258872Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2259247Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2260744Z inductor [('pattern_matcher_nodes', 27), ('qlinear_weight_prepack_matcher_nodes', 16), ('pattern_matcher_count', 10), ('qlinear_unary_matcher_nodes', 5), ('qlinear_weight_prepack_matcher_count', 2), ('qlinear_unary_matcher_count', 2), 
('qlinear_unary_lower_count', 2), ('qlinear_unary_lower_nodes', 2), ('extern_calls', 2), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:50:46.2262110Z graph_break [] 2025-09-09T14:50:46.2262360Z PASSED 2025-09-09T14:50:46.2263050Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qmaxpool2d stats [('calls_captured', 19), ('unique_graphs', 8)] 2025-09-09T14:50:46.2263778Z inline_call [] 2025-09-09T14:50:46.2264018Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2264402Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:50:46.2266006Z inductor [('pattern_matcher_nodes', 12), ('qconv_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 4), ('qmaxpool2d_matcher_nodes', 4), ('qconv_unary_matcher_nodes', 3), ('extern_calls', 3), ('qconv_weight_prepack_matcher_count', 1), ('qconv_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('qmaxpool2d_matcher_count', 1)] 2025-09-09T14:50:46.2267521Z graph_break [] 2025-09-09T14:50:46.2267766Z PASSED 2025-09-09T14:50:46.2268775Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_bfloat16_per_channel_quant_False_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:50:46.2269781Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:50:46.2270213Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9361455Z inductor [('pattern_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_nodes', 6), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9362421Z graph_break [] 2025-09-09T14:51:11.9363405Z PASSED 2025-09-09T14:51:11.9364775Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_bfloat16_per_channel_quant_False_dynamic_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9365808Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:51:11.9366245Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9367493Z inductor [('pattern_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 4), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9368634Z graph_break [] 2025-09-09T14:51:11.9368887Z PASSED 2025-09-09T14:51:11.9369775Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_bfloat16_per_channel_quant_True_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9370781Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:51:11.9371208Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9372197Z inductor [('pattern_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_nodes', 6), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9373085Z graph_break [] 2025-09-09T14:51:11.9373334Z PASSED 2025-09-09T14:51:11.9374209Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_bfloat16_per_channel_quant_True_dynamic_True frames [('total', 1), ('ok', 1)] 
2025-09-09T14:51:11.9375180Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:51:11.9375624Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9376597Z inductor [('pattern_matcher_nodes', 9), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9377499Z graph_break [] 2025-09-09T14:51:11.9377737Z PASSED 2025-09-09T14:51:11.9378615Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_float32_per_channel_quant_False_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9379608Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:51:11.9380036Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9381025Z inductor [('pattern_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_nodes', 6), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9381953Z graph_break [] 2025-09-09T14:51:11.9382209Z PASSED 2025-09-09T14:51:11.9383185Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_float32_per_channel_quant_False_dynamic_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9384174Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:51:11.9384622Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9386029Z inductor [('pattern_matcher_nodes', 10), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 4), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9387149Z graph_break [] 2025-09-09T14:51:11.9387416Z PASSED 2025-09-09T14:51:11.9388333Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_float32_per_channel_quant_True_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9389444Z stats [('calls_captured', 7), ('unique_graphs', 1)] 2025-09-09T14:51:11.9389959Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9390963Z inductor [('pattern_matcher_nodes', 8), ('qlinear_weight_prepack_matcher_nodes', 6), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9392070Z graph_break [] 2025-09-09T14:51:11.9392333Z PASSED 2025-09-09T14:51:11.9393209Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_False_float32_per_channel_quant_True_dynamic_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9394192Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:51:11.9394705Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9395713Z inductor [('pattern_matcher_nodes', 9), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 3), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9396600Z graph_break [] 2025-09-09T14:51:11.9396860Z PASSED 2025-09-09T14:51:11.9397724Z 
test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_bfloat16_per_channel_quant_False_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9398727Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:51:11.9399168Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9400284Z inductor [('pattern_matcher_nodes', 12), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 5), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9401314Z graph_break [] 2025-09-09T14:51:11.9401554Z PASSED 2025-09-09T14:51:11.9402606Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_bfloat16_per_channel_quant_False_dynamic_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9403590Z stats [('calls_captured', 14), ('unique_graphs', 1)] 2025-09-09T14:51:11.9404034Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9405368Z inductor [('pattern_matcher_nodes', 13), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 6), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9406586Z graph_break [] 2025-09-09T14:51:11.9406847Z PASSED 2025-09-09T14:51:11.9407702Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_bfloat16_per_channel_quant_True_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9408692Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:51:11.9409142Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9410439Z inductor [('pattern_matcher_nodes', 12), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 5), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9411465Z graph_break [] 2025-09-09T14:51:11.9411715Z PASSED 2025-09-09T14:51:11.9412726Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_bfloat16_per_channel_quant_True_dynamic_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9413715Z stats [('calls_captured', 14), ('unique_graphs', 1)] 2025-09-09T14:51:11.9414147Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9415270Z inductor [('pattern_matcher_nodes', 12), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 5), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9416366Z graph_break [] 2025-09-09T14:51:11.9416619Z PASSED 2025-09-09T14:51:11.9417490Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_float32_per_channel_quant_False_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9418477Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:51:11.9418920Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9420022Z inductor [('pattern_matcher_nodes', 12), 
('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 5), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9421044Z graph_break [] 2025-09-09T14:51:11.9421298Z PASSED 2025-09-09T14:51:11.9422151Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_float32_per_channel_quant_False_dynamic_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:51:11.9423131Z stats [('calls_captured', 14), ('unique_graphs', 1)] 2025-09-09T14:51:11.9423562Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:51:11.9424902Z inductor [('pattern_matcher_nodes', 13), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 6), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:51:11.9426137Z graph_break [] 2025-09-09T14:51:11.9426386Z PASSED 2025-09-09T14:51:11.9427233Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_float32_per_channel_quant_True_dynamic_False frames [('total', 1), ('ok', 1)] 2025-09-09T14:53:12.1019135Z stats [('calls_captured', 10), ('unique_graphs', 1)] 2025-09-09T14:53:12.1020908Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:53:12.1022491Z inductor [('pattern_matcher_nodes', 12), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 5), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:53:12.1023925Z graph_break [] 2025-09-09T14:53:12.1024444Z PASSED 2025-09-09T14:53:12.1025630Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_smooth_quant_with_int_mm_has_bias_True_float32_per_channel_quant_True_dynamic_True frames [('total', 1), ('ok', 1)] 2025-09-09T14:53:12.1026952Z stats [('calls_captured', 14), ('unique_graphs', 1)] 2025-09-09T14:53:12.1027523Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:53:12.1029030Z inductor [('pattern_matcher_nodes', 12), ('qlinear_weight_prepack_matcher_nodes', 7), ('pattern_matcher_count', 5), ('removed_pointless_view_pair', 1), ('qlinear_weight_prepack_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('extern_calls', 1)] 2025-09-09T14:53:12.1030423Z graph_break [] 2025-09-09T14:53:12.1030739Z PASSED 2025-09-09T14:53:12.1031536Z test/quantization/pt2e/test_x86inductor_fusion.py::TestDynamicPatternMatcher::test_q_attention_block inline_call [] 2025-09-09T14:53:12.1032828Z stats [('calls_captured', 49), ('unique_graphs', 8)] 2025-09-09T14:53:12.1033303Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:53:12.1033780Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:53:12.1036183Z inductor [('pattern_matcher_nodes', 51), ('pattern_matcher_count', 29), ('qlinear_weight_prepack_matcher_nodes', 18), ('qlinear_unary_matcher_nodes', 6), ('extern_calls', 5), ('qlinear_weight_prepack_matcher_count', 3), ('qlinear_unary_matcher_count', 3), ('qlinear_unary_lower_count', 3), ('qlinear_unary_lower_nodes', 3), ('dequant_promotion_matcher_nodes', 2), ('dequant_promotion_matcher_count', 1), ('fxgraph_cache_bypass', 1)] 2025-09-09T14:53:12.1038554Z graph_break [] 2025-09-09T14:53:12.1038965Z 
aten_mm_info [('aten.bmm_32_384_384_64', 1), ('aten.bmm_32_384_64_384', 1)] 2025-09-09T14:53:12.1039541Z PASSED 2025-09-09T14:53:12.1040515Z test/quantization/pt2e/test_x86inductor_fusion.py::TestDynamicPatternMatcher::test_qat_bn_conv2d stats [('calls_captured', 988), ('unique_graphs', 116)] 2025-09-09T14:53:12.1041563Z inline_call [] 2025-09-09T14:53:12.1041865Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:53:12.1042338Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:53:12.1044209Z inductor [('pattern_matcher_nodes', 7), ('qconv_weight_prepack_matcher_nodes', 4), ('pattern_matcher_count', 3), ('qconv_unary_matcher_nodes', 2), ('qconv_weight_prepack_matcher_count', 1), ('qconv_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('extern_calls', 1)] 2025-09-09T14:53:12.1045934Z graph_break [] 2025-09-09T14:53:12.1046175Z PASSED 2025-09-09T14:53:12.1046999Z test/quantization/pt2e/test_x86inductor_fusion.py::TestDynamicPatternMatcher::test_qconv2d_maxpool2d_linear_dynamic_cpu stats [('calls_captured', 30), ('unique_graphs', 8)] 2025-09-09T14:53:12.1047870Z inline_call [] 2025-09-09T14:53:12.1048103Z frames [('total', 1), ('ok', 1)] 2025-09-09T14:53:12.1048472Z aot_autograd [('total', 1), ('autograd_cache_bypass', 1), ('ok', 1)] 2025-09-09T14:53:12.1050856Z inductor [('pattern_matcher_nodes', 21), ('pattern_matcher_count', 8), ('qlinear_weight_prepack_matcher_nodes', 4), ('qconv_weight_prepack_matcher_nodes', 4), ('qmaxpool2d_matcher_nodes', 4), ('extern_calls', 4), ('qconv_unary_matcher_nodes', 3), ('qreshape_matcher_nodes', 3), ('qlinear_weight_prepack_matcher_count', 1), ('qconv_weight_prepack_matcher_count', 1), ('qconv_unary_matcher_count', 1), ('fxgraph_cache_bypass', 1), ('qconv_unary_lower_count', 1), ('qconv_unary_lower_nodes', 1), ('qmaxpool2d_matcher_count', 1), ('qreshape_matcher_count', 1), ('qlinear_unary_lower_count', 1), ('qlinear_unary_lower_nodes', 1)] 2025-09-09T14:53:12.1053143Z graph_break [] 2025-09-09T14:53:12.1053402Z PASSED 2025-09-09T14:53:12.1054206Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_adaptive_avg_pool2d_recipe PASSED 2025-09-09T14:53:12.1055443Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_annotate_mul_tensor PASSED 2025-09-09T14:53:12.1056632Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_attention_block PASSED 2025-09-09T14:53:12.1057824Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_avg_pool2d_recipe PASSED 2025-09-09T14:53:12.1058956Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_cat_recipe PASSED 2025-09-09T14:53:12.1060136Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_cat_recipe_same_inputs PASSED 2025-09-09T14:53:12.1061355Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_cat_recipe_single_input PASSED 2025-09-09T14:53:12.1062496Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_conv2d PASSED 2025-09-09T14:53:12.1063696Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_conv2d_binary PASSED 2025-09-09T14:53:12.1064835Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_conv2d_binary2 PASSED 2025-09-09T14:53:12.1066011Z 
test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_conv2d_binary_unary PASSED 2025-09-09T14:53:12.1067245Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_conv2d_serials_binary_unary PASSED 2025-09-09T14:53:12.1068507Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_conv2d_unary PASSED 2025-09-09T14:53:12.1069674Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_dynamic_quant_linear PASSED 2025-09-09T14:53:12.1070868Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_filter_conv2d_recipe PASSED 2025-09-09T14:53:12.1072082Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_filter_linear_recipe PASSED 2025-09-09T14:53:12.1073296Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_filter_maxpool2d_recipe PASSED 2025-09-09T14:53:12.1074554Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_flatten_recipe PASSED 2025-09-09T14:53:12.1075717Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_flatten_recipe2 PASSED 2025-09-09T14:53:12.1076822Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear PASSED 2025-09-09T14:53:12.1077928Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary PASSED 2025-09-09T14:53:12.1079071Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary2 PASSED 2025-09-09T14:53:12.1080240Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_dynamic PASSED 2025-09-09T14:53:12.1081478Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_dynamic_qat PASSED 2025-09-09T14:53:12.1082678Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_qat PASSED 2025-09-09T14:53:12.1083861Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_unary PASSED 2025-09-09T14:53:12.1085110Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_unary_dynamic PASSED 2025-09-09T14:53:12.1086393Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_unary_dynamic_qat PASSED 2025-09-09T14:53:12.1087661Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_unary_qat PASSED 2025-09-09T14:53:12.1088905Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_binary_unary_serials PASSED 2025-09-09T14:53:12.1090133Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_dynamic_fp16 PASSED 2025-09-09T14:53:12.1091289Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_unary PASSED 2025-09-09T14:53:12.1092455Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_unary_dynamic PASSED 2025-09-09T14:53:12.1093679Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_unary_dynamic_qat PASSED 2025-09-09T14:53:12.1094864Z 
test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_linear_unary_qat PASSED 2025-09-09T14:53:12.1096096Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_lowering_to_x86 SKIPPED 2025-09-09T14:53:12.1097275Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_maxpool2d_recipe PASSED 2025-09-09T14:53:12.1098393Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_qat_conv2d PASSED 2025-09-09T14:53:12.1099543Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_qat_conv2d_binary PASSED 2025-09-09T14:54:56.2983085Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_qat_conv2d_binary2 PASSED 2025-09-09T14:54:56.2984373Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_qat_conv2d_binary_unary PASSED 2025-09-09T14:54:56.2985616Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_qat_conv2d_unary PASSED 2025-09-09T14:54:56.2986831Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_qat_dynamic_quant_linear PASSED 2025-09-09T14:54:56.2988146Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_and_module_type_case1 PASSED 2025-09-09T14:54:56.2989485Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_and_module_type_case2 PASSED 2025-09-09T14:54:56.2990918Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_and_module_type_with_mixed_configs PASSED 2025-09-09T14:54:56.2992258Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_qconfig PASSED 2025-09-09T14:54:56.2993573Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_qconfig_for_dynamic_quant PASSED 2025-09-09T14:54:56.2995021Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_qconfig_with_underscores PASSED 2025-09-09T14:54:56.2996366Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_with_mixed_configs PASSED 2025-09-09T14:54:56.2997549Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_bmm SKIPPED 2025-09-09T14:54:56.2998730Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_cat_granularity0_sizes0 SKIPPED 2025-09-09T14:54:56.2999991Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_cat_granularity0_sizes1 SKIPPED 2025-09-09T14:54:56.3001258Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_cat_granularity0_sizes2 SKIPPED 2025-09-09T14:54:56.3002514Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_cat_granularity1_sizes0 SKIPPED 2025-09-09T14:54:56.3003780Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_cat_granularity1_sizes1 SKIPPED 2025-09-09T14:54:56.3005028Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_cat_granularity1_sizes2 SKIPPED 2025-09-09T14:54:56.3006292Z 
test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_expected_gpu_kernel_fbgemm SKIPPED 2025-09-09T14:54:56.3008043Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3010822Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3012981Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3015131Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3017389Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3019492Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3021606Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3023696Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3025805Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3027945Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3030071Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3032182Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3034280Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3036459Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 
2025-09-09T14:54:56.3038564Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3040669Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3042783Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3044959Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3047059Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3049146Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3051314Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3053433Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3055539Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3057641Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3059778Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3287572Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3289752Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3291940Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3294112Z 
test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3296275Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3298439Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3300578Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3302758Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3305084Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3307243Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3309407Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3311772Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3313916Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3316110Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3318258Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3320409Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3322558Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3324703Z 
test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3326844Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3328983Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3331150Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3333303Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3335441Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_bfloat16_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3337558Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3339762Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3341869Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3343976Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3346176Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3348281Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3350368Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3352462Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3354618Z 
test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3356843Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3358966Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3361076Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3363161Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3598012Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3600113Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3602204Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3604319Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3606647Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3608757Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3610948Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3613553Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3615666Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3617764Z 
test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3619868Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_dynamic_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3621992Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3624128Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3626286Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3628463Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3630634Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3632776Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3634992Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3637148Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3639291Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3641463Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3643701Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3645850Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_False_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3648039Z 
test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3650160Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3652293Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3654442Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3656584Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3658713Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity0_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3660852Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes0 SKIPPED 2025-09-09T14:54:56.3662971Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_AUTO_sizes1 SKIPPED 2025-09-09T14:54:56.3665114Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes0 SKIPPED 2025-09-09T14:54:56.3667254Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_FBGEMM_sizes1 SKIPPED 2025-09-09T14:54:56.3669411Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes0 SKIPPED 2025-09-09T14:54:56.3671567Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_fp8_linear_variants_float32_mode_weight-only_compile_True_granularity1_kernel_preference_KernelPreference_TORCH_sizes1 SKIPPED 2025-09-09T14:54:56.3673420Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_kernel_preference_numerical_equivalence_granularity0_sizes0 SKIPPED 2025-09-09T14:55:16.7535368Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_kernel_preference_numerical_equivalence_granularity0_sizes1 SKIPPED 2025-09-09T14:55:16.7537381Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_kernel_preference_numerical_equivalence_granularity1_sizes0 SKIPPED 2025-09-09T14:55:16.7538993Z 
test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_kernel_preference_numerical_equivalence_granularity1_sizes1 SKIPPED 2025-09-09T14:55:16.7540425Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_moe_weight_reshape_ops SKIPPED 2025-09-09T14:55:16.7541787Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_slice_and_copy_similar_to_vllm_granularity0 SKIPPED 2025-09-09T14:55:16.7543915Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_slice_and_copy_similar_to_vllm_granularity1 SKIPPED 2025-09-09T14:55:16.7545240Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_slice_granularity0 SKIPPED 2025-09-09T14:55:16.7546447Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_slice_granularity1 SKIPPED 2025-09-09T14:55:16.7547757Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_slice_preserves_aliasing_granularity0 SKIPPED 2025-09-09T14:55:16.7549156Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_slice_preserves_aliasing_granularity1 SKIPPED 2025-09-09T14:55:16.7550504Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_to_device_granularity0_sizes0 SKIPPED 2025-09-09T14:55:16.7551823Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_to_device_granularity0_sizes1 SKIPPED 2025-09-09T14:55:16.7553156Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_to_device_granularity0_sizes2 SKIPPED 2025-09-09T14:55:16.7554454Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_to_device_granularity1_sizes0 SKIPPED 2025-09-09T14:55:16.7555861Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_to_device_granularity1_sizes1 SKIPPED 2025-09-09T14:55:16.7557171Z test/quantization/quantize_/workflows/float8/test_float8_tensor.py::TestFloat8Tensor::test_to_device_granularity1_sizes2 SKIPPED 2025-09-09T14:55:16.7558540Z test/quantization/quantize_/workflows/int4/test_int4_marlin_sparse_tensor.py::TestInt4MarlinSparseTensor::test_linear_config0_sizes0 SKIPPED 2025-09-09T14:55:16.7559994Z test/quantization/quantize_/workflows/int4/test_int4_marlin_sparse_tensor.py::TestInt4MarlinSparseTensor::test_linear_config0_sizes1 SKIPPED 2025-09-09T14:55:16.7561414Z test/quantization/quantize_/workflows/int4/test_int4_marlin_sparse_tensor.py::TestInt4MarlinSparseTensor::test_linear_config0_sizes2 SKIPPED 2025-09-09T14:55:16.7562846Z test/quantization/quantize_/workflows/int4/test_int4_marlin_sparse_tensor.py::TestInt4MarlinSparseTensor::test_module_path_config0 SKIPPED 2025-09-09T14:55:16.7564262Z test/quantization/quantize_/workflows/int4/test_int4_marlin_sparse_tensor.py::TestInt4MarlinSparseTensor::test_to_device_config0 SKIPPED 2025-09-09T14:55:16.7565678Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_bfloat16_group_size_128 PASSED 2025-09-09T14:55:16.7567205Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_bfloat16_group_size_32 PASSED 2025-09-09T14:55:16.7568641Z 
test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_bfloat16_group_size_64 PASSED 2025-09-09T14:55:16.7570057Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_float16_group_size_128 PASSED 2025-09-09T14:55:16.7571569Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_float16_group_size_32 PASSED 2025-09-09T14:55:16.7572997Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_float16_group_size_64 PASSED 2025-09-09T14:55:16.7574400Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_float32_group_size_128 PASSED 2025-09-09T14:55:16.7575892Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_float32_group_size_32 PASSED 2025-09-09T14:55:16.7577294Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes0_float32_group_size_64 PASSED 2025-09-09T14:55:16.7578727Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_bfloat16_group_size_128 PASSED 2025-09-09T14:55:16.7580159Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_bfloat16_group_size_32 PASSED 2025-09-09T14:55:16.7581571Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_bfloat16_group_size_64 PASSED 2025-09-09T14:55:16.7582996Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_float16_group_size_128 PASSED 2025-09-09T14:55:16.7584412Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_float16_group_size_32 PASSED 2025-09-09T14:55:16.7585818Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_float16_group_size_64 PASSED 2025-09-09T14:55:16.7587240Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_float32_group_size_128 PASSED 2025-09-09T14:55:16.7588641Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_float32_group_size_32 PASSED 2025-09-09T14:55:16.7590051Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes1_float32_group_size_64 PASSED 2025-09-09T14:55:16.7591466Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_bfloat16_group_size_128 PASSED 2025-09-09T14:55:16.7592879Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_bfloat16_group_size_32 PASSED 2025-09-09T14:55:16.7594293Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_bfloat16_group_size_64 PASSED 2025-09-09T14:55:16.7595789Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_float16_group_size_128 PASSED 2025-09-09T14:55:16.7597192Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_float16_group_size_32 PASSED 
2025-09-09T14:55:16.7598603Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_float16_group_size_64 PASSED 2025-09-09T14:55:16.7600024Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_float32_group_size_128 PASSED 2025-09-09T14:55:16.7601432Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_float32_group_size_32 PASSED 2025-09-09T14:55:16.7602949Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_linear_sizes2_float32_group_size_64 PASSED 2025-09-09T14:55:16.7604367Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_module_path_bfloat16 PASSED 2025-09-09T14:55:16.7605655Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_module_path_float16 PASSED 2025-09-09T14:55:16.7606937Z test/quantization/quantize_/workflows/int4/test_int4_opaque_tensor.py::TestInt4OpaqueTensor::test_module_path_float32 PASSED 2025-09-09T14:55:16.7608316Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes0_bfloat16_group_size_128 SKIPPED 2025-09-09T14:55:16.7609867Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes0_bfloat16_group_size_32 SKIPPED 2025-09-09T14:55:16.7611523Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes0_bfloat16_group_size_64 SKIPPED 2025-09-09T14:55:16.7612984Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes0_float16_group_size_128 SKIPPED 2025-09-09T14:55:16.8034394Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes0_float16_group_size_32 SKIPPED 2025-09-09T14:55:16.8035955Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes0_float16_group_size_64 SKIPPED 2025-09-09T14:55:16.8037460Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes1_bfloat16_group_size_128 SKIPPED 2025-09-09T14:55:16.8038930Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes1_bfloat16_group_size_32 SKIPPED 2025-09-09T14:55:16.8040410Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes1_bfloat16_group_size_64 SKIPPED 2025-09-09T14:55:16.8041869Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes1_float16_group_size_128 SKIPPED 2025-09-09T14:55:16.8043336Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes1_float16_group_size_32 SKIPPED 2025-09-09T14:55:16.8044802Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes1_float16_group_size_64 SKIPPED 2025-09-09T14:55:16.8046272Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes2_bfloat16_group_size_128 SKIPPED 2025-09-09T14:55:16.8047759Z 
test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes2_bfloat16_group_size_32 SKIPPED 2025-09-09T14:55:16.8049231Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes2_bfloat16_group_size_64 SKIPPED 2025-09-09T14:55:16.8050688Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes2_float16_group_size_128 SKIPPED 2025-09-09T14:55:16.8052149Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes2_float16_group_size_32 SKIPPED 2025-09-09T14:55:16.8053613Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_linear_sizes2_float16_group_size_64 SKIPPED 2025-09-09T14:55:16.8054996Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_module_path_bfloat16 SKIPPED 2025-09-09T14:55:16.8056544Z test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py::Int4PlainInt32Tensor::test_module_path_float16 SKIPPED 2025-09-09T14:55:16.8057891Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_bmm_bmm_config0 SKIPPED 2025-09-09T14:55:16.8059262Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_bmm_bmm_config1 SKIPPED 2025-09-09T14:55:16.8060621Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_linear_config0 SKIPPED 2025-09-09T14:55:16.8062134Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_linear_config1 SKIPPED 2025-09-09T14:55:16.8063525Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_module_path_config0 SKIPPED 2025-09-09T14:55:16.8064942Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_module_path_config1 SKIPPED 2025-09-09T14:55:16.8066327Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_to_device_config0 SKIPPED 2025-09-09T14:55:16.8067706Z test/quantization/quantize_/workflows/int4/test_int4_preshuffled_tensor.py::TestInt4PreshuffledTensor::test_to_device_config1 SKIPPED 2025-09-09T14:55:16.8068996Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_activation_prescaling SKIPPED 2025-09-09T14:55:16.8070086Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_bmm SKIPPED 2025-09-09T14:55:16.8071125Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_cat_sizes0 SKIPPED 2025-09-09T14:55:16.8072176Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_cat_sizes1 SKIPPED 2025-09-09T14:55:16.8073239Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_cat_sizes2 SKIPPED 2025-09-09T14:55:16.8074279Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_linear SKIPPED 2025-09-09T14:55:16.8075451Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_moe_weight_reshape_ops SKIPPED 2025-09-09T14:55:16.8076553Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_slice SKIPPED 
2025-09-09T14:55:16.8077689Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_slice_and_copy_similar_to_vllm SKIPPED 2025-09-09T14:55:16.8078924Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_slice_preserves_aliasing SKIPPED 2025-09-09T14:55:16.8080084Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_to_device_sizes0 SKIPPED 2025-09-09T14:55:16.8081196Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_to_device_sizes1 SKIPPED 2025-09-09T14:55:16.8082313Z test/quantization/quantize_/workflows/int4/test_int4_tensor.py::TestInt4Tensor::test_to_device_sizes2 SKIPPED 2025-09-09T14:55:16.8083633Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_cant_initialize_in_cpu SKIPPED 2025-09-09T14:55:16.8085199Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_different_group_sizes_group_size_128 SKIPPED 2025-09-09T14:55:16.8086808Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_different_group_sizes_group_size_32 SKIPPED 2025-09-09T14:55:16.8088465Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_different_group_sizes_group_size_64 SKIPPED 2025-09-09T14:55:16.8089979Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_error_conditions SKIPPED 2025-09-09T14:55:16.8091439Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_linear_sizes0_config0 SKIPPED 2025-09-09T14:55:16.8092906Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_linear_sizes0_config1 SKIPPED 2025-09-09T14:55:16.8094452Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_linear_sizes1_config0 SKIPPED 2025-09-09T14:55:16.8095924Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_linear_sizes1_config1 SKIPPED 2025-09-09T14:55:16.8097382Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_linear_sizes2_config0 SKIPPED 2025-09-09T14:55:16.8098857Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_linear_sizes2_config1 SKIPPED 2025-09-09T14:55:16.8100379Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_mm_int4wo_device_cuda_bfloat16 SKIPPED 2025-09-09T14:55:16.8101878Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_module_path_config0 SKIPPED 2025-09-09T14:55:16.8103340Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_module_path_config1 SKIPPED 2025-09-09T14:55:16.8104876Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_slice_and_copy_similar_to_vllm_config0 SKIPPED 2025-09-09T14:55:16.8106501Z 
test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_slice_and_copy_similar_to_vllm_config1 SKIPPED 2025-09-09T14:55:16.8108010Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_slice_config0 SKIPPED 2025-09-09T14:55:16.8109408Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_slice_config1 SKIPPED 2025-09-09T14:55:16.8111139Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_slice_preserves_aliasing_config0 SKIPPED 2025-09-09T14:55:16.8244561Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_slice_preserves_aliasing_config1 SKIPPED 2025-09-09T14:55:16.8246027Z test/quantization/quantize_/workflows/int4/test_int4_tile_packed_to_4d_tensor.py::TestInt4TilePackedTo4dTensor::test_to_device SKIPPED 2025-09-09T14:55:16.8248430Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8251772Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8255300Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8258594Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8261979Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8265286Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8268582Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8271890Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 
'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8275305Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8278753Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8282120Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8285428Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8288810Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8292095Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8295475Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8298899Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8302249Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8305554Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8308838Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 
'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8312260Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8315660Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8436471Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8440027Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8443335Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8446671Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8449966Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8453324Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8456766Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8460110Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8463400Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8466708Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8469997Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8473415Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8476899Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8480307Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8483616Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8486898Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8490166Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8493498Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8496889Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': 
PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8500242Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8503550Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8628001Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8631298Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8634672Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8638184Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8641536Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8644875Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8648271Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8651759Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8655175Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': 
torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8658503Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8661844Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8665231Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8668524Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8671906Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8675268Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8678578Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8681964Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8685378Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8688776Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8692111Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': 
, 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8695435Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8822694Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8826072Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8829664Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8833043Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8836421Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8839742Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8843027Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8846412Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8849842Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8853211Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8856604Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8859890Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8863251Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8866631Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8870087Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8873461Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8876826Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8880158Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8883470Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.8886858Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': 
PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.8890391Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9013901Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9017390Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9020715Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9024011Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9027401Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9030856Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9034234Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9037627Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9040931Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9044325Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': 
torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9047713Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9051262Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9054675Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9058061Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9061414Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9064764Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9068122Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9071498Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9074904Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9078344Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9081782Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 
'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9205819Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9209273Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9212782Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9216156Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9219515Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9222967Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9226470Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9229906Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9233443Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9236891Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9240370Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 
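Every IntxOpaqueTensor accuracy case above is reported only as SKIPPED; the verbose node ids carry the parametrization (model dtype, packing format, compute target, weight dtype, mapping type, granularity) but not the skip reason. The sketch below is one way to inspect this slice of the suite locally instead of from the log. It assumes a torchao checkout with the test dependencies installed and is run from the repository root; the -k filter is illustrative only (pytest matches the quoted words as substrings of the parametrized ids shown here), not an exact reproduction of this job.

# Illustrative only: narrow to one corner of the parametrization grid and print
# skip reasons (-rs) instead of bare SKIPPED markers.
pytest test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py \
  -k "torchao_lowbit and int4 and PerGroup" -rs -v

# Enumerate the full grid without executing it: one node id per line, followed
# by an "N tests collected" summary.
pytest test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py \
  --collect-only -q

The same -k filtering works for the test_int4_tensor.py and test_int4_tile_packed_to_4d_tensor.py files listed earlier in this run.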
2025-09-09T14:55:16.9243791Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9247292Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9250721Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9254109Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9257464Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9260820Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9264268Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9267824Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9271263Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9274741Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9394251Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9397686Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9401115Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9404604Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9408048Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9411596Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9414991Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9418480Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9421932Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9425568Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9429005Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9432377Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9435829Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9439196Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9442632Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9446150Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.bfloat16, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9449521Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9452874Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9456144Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9459476Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9462750Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9577398Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9580694Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9583973Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9587330Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9590752Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9594090Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9597539Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9600809Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9604159Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9607512Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9611083Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9614432Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9617730Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} 
SKIPPED 2025-09-09T14:55:16.9621025Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9624301Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9627618Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9631105Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9634434Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9637849Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9641139Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9644406Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9762840Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9766248Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9769597Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 
'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9772898Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9776173Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9779579Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9782921Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9786336Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9789777Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9793043Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9796374Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9799660Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9803015Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9806440Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int7, 
'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9809756Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9813285Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9816558Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9819847Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9823275Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9826689Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_auto', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9830069Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9949234Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9952646Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9956179Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_kleidiai', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9959572Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 
'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9962863Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9966244Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9969558Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9972911Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9976221Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9979516Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9982834Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9986192Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9989618Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9992964Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:16.9996320Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:16.9999689Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0002978Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0006406Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0009849Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0013376Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0016696Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0142192Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0145517Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0148865Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0152291Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': 
PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0155895Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0159217Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0162603Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0165911Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0169261Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0172710Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0176070Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0179379Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0182658Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0185961Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0189401Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': 
torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0192847Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0196331Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0199651Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0202904Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0206222Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0209597Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0325985Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0329373Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0332701Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0336128Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0339448Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 
'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0342886Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0346350Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': 'torchao_lowbit', 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0349729Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0353125Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0356543Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0359923Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0363279Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0366629Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0370045Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0373399Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0376891Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': 
torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0380380Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0383793Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0387167Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0390523Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0393888Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0511087Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0514655Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0518208Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0521590Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0525040Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0528394Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} 
SKIPPED 2025-09-09T14:55:17.0531821Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0535337Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0538754Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0542123Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0545488Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0548825Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0552294Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0555856Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0559389Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0562770Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0566129Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0569466Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0572889Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.0576382Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.0579795Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.3191265Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.3196442Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.3201006Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.3206783Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.3212607Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.3217823Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.3224190Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.3243299Z 
test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.3247893Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.3252518Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:17.3257303Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_accuracy_{'model_dtype': torch.float32, 'packing_format': , 'compute_target': None, 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:17.3260687Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_export_compile_aoti SKIPPED 2025-09-09T14:55:17.3262953Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_moe_quant_intx SKIPPED 2025-09-09T14:55:17.3265196Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_serialization_{'packing_format': , 'compute_target': 'aten'} SKIPPED 2025-09-09T14:55:17.3267940Z test/quantization/quantize_/workflows/intx/test_intx_opaque_tensor.py::TestIntxOpaqueTensor::test_serialization_{'packing_format': , 'compute_target': 'torchao_auto'} SKIPPED 2025-09-09T14:55:17.3270351Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_embedding PASSED 2025-09-09T14:55:17.3272512Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_export_int8_dyn_act_intx_weight_config PASSED 2025-09-09T14:55:17.3275079Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_export_int8_dyn_act_intx_weight_config_with_unwrap PASSED 2025-09-09T14:55:17.3277523Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_export_intx_weight_only_config PASSED 2025-09-09T14:55:17.3281096Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.3285592Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.3290009Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': 
torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.3294362Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.3298704Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.3303036Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.3307810Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4396868Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.4401251Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.4405973Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4410612Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.4415742Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.4420787Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4425606Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.4430835Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.4436130Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4441328Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.4445945Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.4450321Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4454811Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.4459150Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.4464086Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4469884Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.4475411Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 
'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.4479851Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4485127Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.4490273Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.4495930Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.4501011Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5600051Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.5606530Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.5612935Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5619918Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.5627476Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.5633720Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5641047Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.5648206Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.5655790Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5662941Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.5669537Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.5675265Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5681383Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.5687408Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.5693764Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5700208Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 
32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.5706426Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.5713043Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5719858Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.5727372Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.5732959Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.5738810Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6785652Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.6790076Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.6794433Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6799209Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.6804296Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.6808836Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6814212Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.6819254Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.6823876Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6829064Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.6833637Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.6838051Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6842405Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.6846840Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.6851893Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 
'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6856692Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.6861126Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.6865591Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6870655Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.6875583Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.6879896Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.6884224Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.7915038Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.7919456Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.7923827Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.7928489Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.7933480Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.7938382Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.7942970Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.7947523Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.7953895Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.7958386Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.7962772Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.7967120Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.7971467Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.7975804Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 
'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.7980366Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.7985425Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.7990294Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.7994963Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.8000346Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.8005424Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.8010598Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.8015472Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9034098Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.9038620Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.9042959Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9047503Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.9052810Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.9057644Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9062430Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.9067714Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.9072782Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9077791Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.9082803Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.9087610Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9091937Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int4, 'group_size': 64, 
'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.9096272Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.9100766Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9105692Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.9110923Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.9115900Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9121133Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:17.9125543Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:17.9130360Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:17.9135396Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0163272Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.0167700Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.0172360Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0177467Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.0182415Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.0186911Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0192069Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.0196822Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.0201161Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0206279Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.0211285Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.0215606Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 
64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0219944Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.0224458Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.0229309Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0234329Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.0238732Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.0243171Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0248238Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.0253051Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.0257403Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.0261731Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1284715Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.1289483Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.1293831Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1298411Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.1302769Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.1307131Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1311753Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.1316111Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.1320511Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1324842Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.1329169Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 
'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.1333624Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1337939Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.1342336Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.1347407Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1352263Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.1356868Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.1362011Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1367079Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.1371937Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.1376681Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.1381742Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2405985Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.2410565Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.2415091Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2419839Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.2424872Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.2429343Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2434442Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.2439584Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.2444216Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2449333Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 
'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.2454274Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.2458593Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2462963Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.2467341Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.2472195Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2477215Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.2481728Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.2486788Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2491420Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.2495786Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.2501031Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.2505700Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3535637Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.3540158Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.3544493Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3549321Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.3554456Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.3559057Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3564227Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.3568819Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.3574097Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 
128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3579021Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.3583352Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.3587819Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3592210Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.3596692Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.3601007Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3605358Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.3609846Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.3614348Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3618717Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.3623188Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.3627510Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.3631923Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5526075Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5530468Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.5534844Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5539189Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5543506Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.5547830Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5552131Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5556815Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 
'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.5561151Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5565611Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5570408Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_intx_unpacked_v2_is_close_to_qdq_v1_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.5573839Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_linear PASSED 2025-09-09T14:55:18.5577508Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5581969Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5586385Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.5590533Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5594350Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5598186Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.5602001Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5606793Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5611447Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.5616325Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.5620730Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.5625098Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8310673Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8314671Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8318455Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8322252Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8326050Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8329849Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8333634Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8337738Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8341537Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8344812Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8349338Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8353703Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8358153Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8362604Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8366940Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8370771Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8374599Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8379172Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8383691Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8387566Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8392071Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8396627Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:18.8401114Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:18.8405540Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:18.8410258Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1081732Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1087278Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1092680Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1097994Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1103295Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1109700Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1115992Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1121694Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1126962Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1132529Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1138035Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1143127Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1148943Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1154699Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1159724Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1165623Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1171417Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int2, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1177206Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1182606Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1189011Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1194370Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1199501Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.1205360Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.1211196Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.1216827Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3913124Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.3917099Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.3920893Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3924695Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.3928813Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.3932608Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3937120Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.3940920Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.3944707Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3948481Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.3952265Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.3956592Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3961066Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.3965573Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.3970007Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3974516Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.3978555Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.3982332Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3986186Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int3, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.3990173Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.3994778Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.3999256Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.4003790Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.4008244Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.4012844Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6696479Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6700349Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6704500Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6708316Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6712392Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6716253Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6720069Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6723868Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6727663Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6731009Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6735563Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6739989Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6744306Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6748844Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6753270Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6757145Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6761152Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6765583Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6769871Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6774165Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6778557Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6783063Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.6787480Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.6791918Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.6796461Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9530950Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9534831Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9538630Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9542575Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9546379Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9550192Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9553982Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9557849Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9561632Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9565429Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9569186Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9572999Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9577257Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9581676Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9585767Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9590374Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9594704Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9598498Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9602288Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9606064Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9609857Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9613923Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:19.9618208Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:19.9622764Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:19.9627125Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2373805Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2378054Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2381883Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2385685Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2389482Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2393293Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2397141Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2400942Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2404729Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2408534Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2412478Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2416870Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2421322Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2425898Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2430362Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2434904Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2439330Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2443378Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2447177Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2450982Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2454978Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2459485Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.2464014Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.2468530Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.2472938Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5198238Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5202125Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5205976Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5209811Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5213780Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5217573Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5221393Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5225199Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5228986Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5232544Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5235592Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5240202Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5244903Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5249792Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5254184Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5258357Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5262151Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5266289Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5271247Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5276563Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5281681Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5286987Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.5291195Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.5295064Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.5298857Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8022351Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8026232Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8030056Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8033866Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8037798Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8041629Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8045428Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8049249Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8053340Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8057140Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8061062Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8064988Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8069700Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8074299Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8079262Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8083667Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8087767Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8091565Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8095379Z 
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8099378Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8103517Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8108011Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.8113040Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.8117713Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.8122512Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.9099223Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.9102098Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} PASSED 2025-09-09T14:55:20.9104915Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} PASSED 2025-09-09T14:55:20.9107743Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_qat_int8_dyn_act_intx_weight_config_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} PASSED 2025-09-09T14:55:20.9110049Z 
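The block above is the full parameter sweep for test_qat_int8_dyn_act_intx_weight_config: weight_dtype from torch.int2 through torch.int8, group_size 32/64/128, and every scale_dtype/model_dtype pairing of bfloat16, float16 and float32, all PASSED. A minimal sketch of building one point from that sweep with torchao's quantize_ API follows; it is illustrative only, not the test's own code, and the exact import paths and keyword names (weight_dtype, weight_granularity) are assumptions against a recent torchao nightly.

import torch
import torch.nn as nn
from torchao.quantization import quantize_, Int8DynamicActivationIntxWeightConfig
from torchao.quantization.granularity import PerGroup

# Toy model in one of the swept model_dtypes.
model = nn.Sequential(nn.Linear(256, 128)).to(torch.bfloat16)

# One point from the sweep: int4 weights, group size 64 (assumed keyword names).
config = Int8DynamicActivationIntxWeightConfig(
    weight_dtype=torch.int4,          # swept over torch.int2 .. torch.int8 above
    weight_granularity=PerGroup(64),  # swept over group sizes 32 / 64 / 128 above
)
quantize_(model, config)              # swaps the Linear weight for a quantized tensor subclass

out = model(torch.randn(2, 256, dtype=torch.bfloat16))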
test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_serialization_int8_dyn_act_intx_weight_config PASSED 2025-09-09T14:55:20.9111733Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_serialization_intx_weight_only_config PASSED 2025-09-09T14:55:20.9113217Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_slice PASSED 2025-09-09T14:55:20.9114655Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_slice_and_copy_ PASSED 2025-09-09T14:55:20.9116268Z test/quantization/quantize_/workflows/intx/test_intx_unpacked_to_int8_tensor.py::TestIntxUnpackedToInt8Tensor::test_to_dtype PASSED 2025-09-09T14:55:20.9117481Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_concat_linear_cpu_x_dim_2_bias_False SKIPPED 2025-09-09T14:55:20.9118510Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_concat_linear_cpu_x_dim_2_bias_True SKIPPED 2025-09-09T14:55:20.9119542Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_concat_linear_cpu_x_dim_3_bias_False SKIPPED 2025-09-09T14:55:20.9120560Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_concat_linear_cpu_x_dim_3_bias_True SKIPPED 2025-09-09T14:55:20.9121791Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_False_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9122983Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_False_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9124153Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_False_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9125325Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_False_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9126477Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_True_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9127657Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_True_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9128823Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_True_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9129970Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_2_bias_True_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9131146Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_False_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9132331Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_False_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9133495Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_False_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9134678Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_False_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9135851Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_True_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9137020Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_True_bs_160_sym_quant_a_True SKIPPED 
2025-09-09T14:55:20.9138180Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_True_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9139330Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_bfloat16_x_dim_3_bias_True_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9140508Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_False_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9141681Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_False_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9142836Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_False_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9143991Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_False_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9145138Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_True_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9146361Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_True_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9147515Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_True_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9148652Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_2_bias_True_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9149805Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_False_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9151035Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_False_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9152184Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_False_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9153339Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_False_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9154482Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_True_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9155731Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_True_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9156888Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_True_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9158021Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float16_x_dim_3_bias_True_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9159186Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_False_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9160355Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_False_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9161509Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_False_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9162683Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_False_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9163843Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_True_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9164992Z 
test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_True_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9166147Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_True_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9167283Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_2_bias_True_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9168452Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_False_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9169632Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_False_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9333939Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_False_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9335106Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_False_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9336277Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_True_bs_160_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9337432Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_True_bs_160_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9338728Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_True_bs_1_sym_quant_a_False SKIPPED 2025-09-09T14:55:20.9339882Z test/quantization/test_da8w4_cpu.py::TestDa8w4Cpu::test_8da4w_cpu_float32_x_dim_3_bias_True_bs_1_sym_quant_a_True SKIPPED 2025-09-09T14:55:20.9340876Z test/quantization/test_gptq.py::TestGPTQ::test_gptq_quantizer_int4_weight_only SKIPPED 2025-09-09T14:55:20.9341778Z test/quantization/test_gptq.py::TestMultiTensorFlow::test_multitensor_add_tensors SKIPPED 2025-09-09T14:55:20.9342726Z test/quantization/test_gptq.py::TestMultiTensorFlow::test_multitensor_inplace_operation SKIPPED 2025-09-09T14:55:20.9343794Z test/quantization/test_gptq.py::TestMultiTensorFlow::test_multitensor_pad_unpad SKIPPED 2025-09-09T14:55:20.9344780Z test/quantization/test_gptq.py::TestMultiTensorInputRecorder::test_gptq_with_input_recorder SKIPPED 2025-09-09T14:55:20.9345825Z test/quantization/test_gptq.py::TestMultiTensorInputRecorder::test_multitensor_input_recorder SKIPPED 2025-09-09T14:55:20.9347085Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_aten SKIPPED 2025-09-09T14:55:20.9348501Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_kleidiai SKIPPED 2025-09-09T14:55:20.9351231Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9355177Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 
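Note on the runs above: the DA8W4 CPU and GPTQ suites were all reported as SKIPPED, and this verbose listing does not include the recorded skip reasons. A minimal local repro sketch (not part of the CI script above) using standard pytest options and the file path taken from this log would be:

    import pytest

    # "-rs" asks pytest to print the reason recorded for each SKIPPED test,
    # which the verbose listing above omits; "-k" narrows the run to the
    # 8da4w CPU cases (the keyword expression here is only an illustration).
    exit_code = pytest.main([
        "test/quantization/test_da8w4_cpu.py",
        "-v",
        "-rs",
        "-k", "test_8da4w_cpu",
    ])
    print("pytest exit code:", exit_code)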
2025-09-09T14:55:20.9359016Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9362850Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9366723Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9370694Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9374640Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9378501Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9382389Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9386214Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9390109Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, 
has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9394062Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9398049Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9401877Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9405710Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9484596Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9488519Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9492583Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9496476Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9500311Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9504141Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9507969Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9512000Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9516037Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9520008Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9523845Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9527747Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9531578Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, 
target=Target.AUTO), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9535464Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9539397Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9543280Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9547121Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9550953Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9554876Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9630488Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9634645Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9638515Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9642343Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9646175Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9650017Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9653911Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9657864Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9661827Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9665657Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9669545Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, 
target=Target.AUTO), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9673369Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9677316Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9681278Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.AUTO), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9685201Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9689110Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9693010Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9696960Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9700932Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9777043Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int1, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9780993Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9784899Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9788791Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9792686Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9796675Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9800697Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int2, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9804732Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9808616Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, 
has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9812733Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9816643Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9820575Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9824584Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int3, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9828539Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9832428Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9836371Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9840359Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9844288Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9848374Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int4, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9926455Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9930400Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9934282Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9938157Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9942116Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9946133Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int5, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9950184Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, 
has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9954078Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9958077Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9961956Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9965913Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9969944Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int6, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9973855Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9977754Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9981636Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9985560Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9989513Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:20.9993589Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int7, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:20.9997573Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:21.0148206Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:21.0152125Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:21.0156032Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:21.0160002Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerAxis(axis=0)} SKIPPED 2025-09-09T14:55:21.0164024Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_accuracy_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, 
has_weight_zeros=None, has_bias=None, target=Target.UNIVERSAL), 'weight_dtype': torch.int8, 'weight_mapping_type': , 'weight_granularity': PerGroup(group_size=128)} SKIPPED 2025-09-09T14:55:21.0166770Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_export_QDQLayout SKIPPED 2025-09-09T14:55:21.0168524Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_export_compile_aoti_PackedLinearInt8DynamicActivationIntxWeightLayout SKIPPED 2025-09-09T14:55:21.0170518Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_export_dynamic_shape_PackedLinearInt8DynamicActivationIntxWeightLayout SKIPPED 2025-09-09T14:55:21.0172760Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0175261Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0177697Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0180123Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0182526Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0184941Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0187345Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0189743Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0192149Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 128, 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0194617Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.bfloat16, 'model_dtype': 
torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0197035Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0199418Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0201905Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0204305Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0206695Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0209120Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0211674Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0214069Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 32, 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0216461Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0218871Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0221267Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0331215Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0333629Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0336028Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0338425Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0340832Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0343321Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynActInt4WeightQATQuantizer_{'group_size': 64, 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0345933Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynamicActivationInt4WeightConfig_{'group_size': 32, 'mapping_type': , 'act_mapping_type': } SKIPPED 2025-09-09T14:55:21.0348685Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_Int8DynamicActivationInt4WeightConfig_{'group_size': 64, 'mapping_type': , 'act_mapping_type': } SKIPPED 2025-09-09T14:55:21.0351895Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0355445Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0358910Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0362409Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0365851Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 
'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0369291Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0372755Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0376228Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0379744Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0383221Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0386728Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0390193Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0393647Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0397146Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0400581Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 
'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0501438Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0504902Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0508369Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0512111Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0515623Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0519183Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0522628Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0526091Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0529544Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0532996Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 
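Note on the test IDs above: each test_accuracy_ and test_identical_to_* ID embeds its parametrization as a dict, and the mapping-type values appear to have been stripped when the log was rendered (the enum reprs are missing after 'weight_mapping_type', 'mapping_type', and 'act_mapping_type'). As a rough, hedged illustration of what these parametrizations correspond to, the sketch below uses keyword names that mirror the keys in the log; the config name, import locations, and exact constructor signature may differ between torchao versions, and MappingType.SYMMETRIC stands in for the stripped value:

    import torch
    import torch.nn as nn
    from torchao.quantization import quantize_, Int8DynamicActivationIntxWeightConfig
    from torchao.quantization.granularity import PerGroup
    from torchao.quantization.quant_primitives import MappingType

    # Toy model in bfloat16, matching one of the model_dtype values in the test IDs.
    model = nn.Sequential(nn.Linear(256, 256)).to(torch.bfloat16)

    # Keyword names mirror the parametrization keys in the log; values are examples.
    config = Int8DynamicActivationIntxWeightConfig(
        weight_dtype=torch.int4,                    # the log covers torch.int1 .. torch.int8
        weight_granularity=PerGroup(128),           # or PerAxis(0), as in the test IDs
        weight_mapping_type=MappingType.SYMMETRIC,  # placeholder for the stripped value
    )
    quantize_(model, config)  # replaces Linear weights with quantized tensors in place

    out = model(torch.randn(1, 256, dtype=torch.bfloat16))
    print(out.shape)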
2025-09-09T14:55:21.0536447Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0539882Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0543353Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0546871Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0550311Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0553820Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0557320Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0560757Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0564196Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0567629Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0571088Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0670581Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0674055Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0677678Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0681165Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0684699Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0688140Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0691592Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0695053Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0698498Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0701956Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0705416Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0708858Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0712512Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0715996Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0719519Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0722960Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0726396Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0729847Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0733293Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0736763Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0740209Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0833927Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0837580Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0841021Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0844564Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0848009Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0851435Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0854882Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0858314Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0861757Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0865185Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0868613Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0872065Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0875549Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0879051Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0882459Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0885914Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0889371Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0892844Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0896310Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0899763Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0903199Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0989371Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.0992849Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.0996398Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.0999856Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1003312Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1006771Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1010310Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1013771Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1017209Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1020645Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1024145Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1027570Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1031088Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1034564Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1038005Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1041436Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1044855Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1048305Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1051741Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1055257Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1058681Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1151314Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1154906Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1158355Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1161806Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1165252Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1168650Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1172080Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1175493Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1178996Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int1, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1182443Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1185906Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1199769Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1203247Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1206724Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1210333Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1213799Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1217276Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1220725Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1224340Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1227783Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1231236Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1318379Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1321859Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1325293Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1328756Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1332204Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1335654Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1339102Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1342656Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1346116Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1349571Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1353114Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1356627Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1360072Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1363537Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1366985Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1370433Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1373887Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1377391Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1380841Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1384281Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1387762Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1483451Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1486907Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1490321Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1493780Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1497229Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1500687Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1504239Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1507710Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1511277Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1514887Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1518360Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1521814Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1525274Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1528744Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1532206Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1535626Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1539141Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1542561Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1546067Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1549497Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1552926Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1643878Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1647341Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1650784Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1654239Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1657678Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1661224Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1664655Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1668176Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1671615Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1675125Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1678573Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.1681991Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.1685440Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.1688864Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int2, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED
[... further parametrizations of test_identical_to_IntXQuantizationAwareTrainingConfig reported SKIPPED between 2025-09-09T14:55:21.1692308Z and 2025-09-09T14:55:21.3302716Z, covering weight_dtype torch.int2/torch.int3/torch.int4, group_size 32/64/128, and scale_dtype/model_dtype torch.bfloat16/torch.float16/torch.float32; the mapping_type and act_mapping_type values are missing from the captured log ...]
2025-09-09T14:55:21.3306154Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3309607Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3313221Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3316709Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3320239Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3323666Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3327106Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3330528Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3422873Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3426329Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3429760Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3433310Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3436845Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3440306Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3443845Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3447285Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3450717Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3454147Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3457603Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3461042Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3464486Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3467974Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3471427Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3474905Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3478419Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3481844Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3485287Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3488733Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3492146Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3581894Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3585353Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3588923Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3592374Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3595864Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3599383Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3602822Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3606265Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3609715Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3613331Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3616785Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3620242Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3623797Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3627216Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3630644Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3634113Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3637611Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3641042Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int4, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3644474Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3647956Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3651433Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3742352Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3745919Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3749376Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3752849Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3756429Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3759879Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3763334Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3766784Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3770213Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3773690Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3777133Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3780628Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3784100Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3787523Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3791051Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3794503Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3798029Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3801478Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3804919Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3808361Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3811950Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3904007Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3907663Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3911395Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3915000Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3918546Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3922069Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3925596Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3929078Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3932570Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3936022Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3939714Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3943139Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3946825Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3950278Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3953805Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3957452Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3960963Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3964490Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.3968003Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.3971529Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.3975114Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4064851Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4068497Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4072048Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4075624Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4079154Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4082673Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4086218Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4089700Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4093146Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4096769Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4100302Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4103895Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4107327Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4111073Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4114600Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4118115Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4121647Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4125143Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4128679Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4132284Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4135807Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4228386Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4231890Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4235448Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4238954Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4242469Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4245955Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4249502Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4253014Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4256665Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4260129Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4263874Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4267333Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4270921Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4274383Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4277986Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4281545Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4285087Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4288616Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4292207Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4295738Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4299230Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4388403Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4391922Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4395422Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4398966Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4402492Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4406022Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4409533Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4413307Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4416816Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4420438Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4423932Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4427443Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4430865Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4434365Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4441024Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4444548Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4448049Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4451679Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4455164Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4458705Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4462190Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int5, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4551550Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4555168Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4558725Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4562417Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4565940Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4569473Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4573096Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4576633Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4580257Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4583713Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4587325Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4590769Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4594286Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4597959Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4601468Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4604990Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4608579Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4612293Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4615845Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4619443Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4622913Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4713097Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4716704Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4720318Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4723831Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4727373Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4730994Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4734560Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4738128Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4741650Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4745089Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4748667Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4752098Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4755695Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4759223Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4762737Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4766343Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4769873Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4773514Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4777005Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4780518Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4784036Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4872302Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4876011Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4879533Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4883057Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4886670Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4890189Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4893687Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4897190Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4900713Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4904226Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4907735Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4911432Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4915003Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4918548Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4922156Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4925701Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4929267Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4932794Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.4936257Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.4939770Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.4943318Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5033702Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5037270Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5040804Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5044420Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5047909Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5051457Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5054953Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5058410Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5061968Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5065402Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5069076Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5072516Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5076100Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5079720Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5083216Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5086814Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5090329Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5093844Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5097309Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5100905Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5104410Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5194863Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5198414Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5202041Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5205545Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5209140Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5212830Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5216365Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5219822Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5223404Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5226915Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5230499Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5233934Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5237601Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5241140Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5244690Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5248207Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5251721Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5255234Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5258719Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5262282Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5265756Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5356939Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5360586Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5364094Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int6, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5367668Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5371239Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5374774Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5378333Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5381852Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5385448Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5388981Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5392433Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5396147Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5399686Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5403280Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5406789Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5410461Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5413995Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5417531Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5421109Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5424634Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5428083Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5518620Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5522183Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5525794Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5529318Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5532859Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5536388Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5539932Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5543514Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5547034Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5550626Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5554170Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5557690Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5561336Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5567031Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5570627Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5574148Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5577699Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5581279Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5584818Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5588373Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5591832Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5678113Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5681722Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5685346Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5688885Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5692341Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5695873Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5699446Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5702967Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5706526Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5710208Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5713718Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5717374Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5720958Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5724505Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5728008Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5731543Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5735125Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5738654Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5742153Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5745758Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5749195Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5839809Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5843391Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5846933Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5850453Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5853984Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5857566Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5861093Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5864670Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5868203Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5871648Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5875334Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5878807Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5882324Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5885878Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5889421Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5893008Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5896454Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5900081Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.5903542Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.5907138Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.5910766Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6001581Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6005248Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6008786Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6012439Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6016064Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6019593Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6023171Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6026696Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6030204Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6033782Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6037451Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6040969Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6044513Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6048029Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6051582Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6055101Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6058642Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6062145Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6065678Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6069201Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6072766Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6155364Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6158889Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6162410Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6166015Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6169548Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6173083Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int7, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6176606Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6180181Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6183735Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6187257Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6190776Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6194306Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6197925Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6201513Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6205059Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6208646Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6212298Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6215931Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6219468Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6223057Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6226585Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6310039Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6313597Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6317286Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6320828Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6324414Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6327918Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6331517Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6335012Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6338595Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6342143Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6345656Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6349181Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6352744Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6356320Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6359893Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6363408Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6366942Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6370440Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6373934Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6377464Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6380944Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 128, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6463224Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6467129Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6470590Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6474328Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6478070Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6481667Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6485223Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6488834Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6492355Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6495911Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6499418Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6502987Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6506492Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6510213Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6513725Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6517377Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6520911Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6524498Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6528027Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6531574Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6535063Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6619368Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6622912Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6626516Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6630018Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6633586Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6637127Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6640687Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6644212Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6647722Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6651293Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6654780Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6658271Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6661791Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6665224Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6668763Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 32, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6672295Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6675948Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6679489Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6683014Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6686617Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6690095Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6773925Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6777746Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6781389Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6784986Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6788537Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6792093Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6795705Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6799208Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6802801Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6806297Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6809812Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6813518Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6817057Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6833778Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6837443Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6841102Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6844647Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6848158Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:21.6851679Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:21.6855230Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:21.6858729Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:22.3431667Z 
test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:22.3435311Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:22.3438878Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.bfloat16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:22.3442355Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:22.3445831Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:22.3449261Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float16, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:22.3452686Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.bfloat16} SKIPPED 2025-09-09T14:55:22.3456180Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float16} SKIPPED 2025-09-09T14:55:22.3459611Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_identical_to_IntXQuantizationAwareTrainingConfig_{'weight_dtype': torch.int8, 'group_size': 64, 'mapping_type': , 'act_mapping_type': , 'scale_dtype': torch.float32, 'model_dtype': torch.float32} SKIPPED 2025-09-09T14:55:22.3461935Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_moe_quant_intx SKIPPED 2025-09-09T14:55:22.3464133Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_serialization_{'layout': PackedLinearInt8DynamicActivationIntxWeightLayout(group_size=None, bit_width=None, has_weight_zeros=None, 
has_bias=None, target=Target.AUTO)} SKIPPED 2025-09-09T14:55:22.3466353Z test/quantization/test_int8_dynamic_activation_intx_weight_config_v1.py::TestInt8DynamicActivationIntxWeight::test_serialization_{'layout': QDQLayout()} SKIPPED 2025-09-09T14:55:22.3467644Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8dq_base_0_single_token SKIPPED 2025-09-09T14:55:22.3468640Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8dq_base_1_multiple_tokens SKIPPED 2025-09-09T14:55:22.3469656Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8dq_fake_dim_0_single_token SKIPPED 2025-09-09T14:55:22.3470709Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8dq_fake_dim_1_multiple_tokens SKIPPED 2025-09-09T14:55:22.3471720Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8wo_base_0_single_token SKIPPED 2025-09-09T14:55:22.3472708Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8wo_base_1_multiple_tokens SKIPPED 2025-09-09T14:55:22.3473708Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8wo_fake_dim_0_single_token SKIPPED 2025-09-09T14:55:22.3474790Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_fp8wo_fake_dim_1_multiple_tokens SKIPPED 2025-09-09T14:55:22.3475787Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int4wo_base_0_single_token SKIPPED 2025-09-09T14:55:22.3476795Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int4wo_base_1_multiple_tokens SKIPPED 2025-09-09T14:55:22.3477817Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int4wo_fake_dim_0_single_token SKIPPED 2025-09-09T14:55:22.3478895Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int4wo_fake_dim_1_multiple_tokens SKIPPED 2025-09-09T14:55:22.3479925Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8dq_base_0_multiple_tokens SKIPPED 2025-09-09T14:55:22.3480948Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8dq_fake_dim_0_multiple_tokens SKIPPED 2025-09-09T14:55:22.3481960Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8wo_base_0_single_token SKIPPED 2025-09-09T14:55:22.3482983Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8wo_base_1_multiple_tokens SKIPPED 2025-09-09T14:55:22.3484043Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8wo_base_cpu_0_single_token cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:22.3485252Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 33, in forward 2025-09-09T14:55:22.3486033Z scores = self.router(x) # [T, E] 2025-09-09T14:55:22.3486253Z 2025-09-09T14:55:22.3486413Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:22.3487204Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 34, in forward 2025-09-09T14:55:22.3487969Z scores = F.softmax(scores, dim=-1) 2025-09-09T14:55:22.3488179Z 2025-09-09T14:55:22.3488349Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:22.3489127Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 34, in forward 2025-09-09T14:55:22.3489877Z scores = F.softmax(scores, dim=-1) 2025-09-09T14:55:22.3490086Z 2025-09-09T14:55:22.3490202Z cudagraph partition due to non gpu ops 2025-09-09T14:55:22.3490585Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:22.3491369Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 35, in forward 2025-09-09T14:55:22.3492121Z scores, expert_indices = torch.topk( 2025-09-09T14:55:22.3492341Z 2025-09-09T14:55:22.3492513Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:22.3493328Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 35, in forward 2025-09-09T14:55:22.3494082Z scores, expert_indices = torch.topk( 2025-09-09T14:55:22.3494301Z 2025-09-09T14:55:22.3494459Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:22.3495251Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 38, in forward 2025-09-09T14:55:22.3496082Z scores /= scores.sum(dim=-1, keepdim=True).to(x.dtype) # [T, A] 2025-09-09T14:55:22.3496385Z 2025-09-09T14:55:22.3496576Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:22.3497369Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:22.3498167Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:22.3498977Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 99, in forward 2025-09-09T14:55:22.3499722Z y1 = F.silu(F.linear(x, w1[index])) 2025-09-09T14:55:22.3499929Z 2025-09-09T14:55:22.3500085Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:22.3500873Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:22.3501686Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:22.3502489Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 100, in forward 2025-09-09T14:55:22.3503257Z y3 = F.linear(x, w3[index]) 2025-09-09T14:55:22.3503438Z 2025-09-09T14:55:34.8483173Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8484339Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8485469Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8486643Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 103, in forward 2025-09-09T14:55:34.8487489Z cur_out = F.linear(y1 * y3, y2) 2025-09-09T14:55:34.8487703Z 2025-09-09T14:55:34.8487884Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8488766Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8489873Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8490903Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 99, in forward 2025-09-09T14:55:34.8492120Z y1 = F.silu(F.linear(x, w1[index])) 2025-09-09T14:55:34.8492332Z 2025-09-09T14:55:34.8492490Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:34.8493298Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8494109Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8494904Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 100, in forward 2025-09-09T14:55:34.8495646Z y3 = F.linear(x, w3[index]) 2025-09-09T14:55:34.8495829Z 2025-09-09T14:55:34.8495987Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8496781Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8497582Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8498390Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 103, in forward 2025-09-09T14:55:34.8499129Z cur_out = F.linear(y1 * y3, y2) 2025-09-09T14:55:34.8499409Z 2025-09-09T14:55:34.8499569Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8500362Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8501156Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8501964Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 95, in forward 2025-09-09T14:55:34.8502769Z w2 = self.w2[expert_indices] 2025-09-09T14:55:34.8502957Z 2025-09-09T14:55:34.8503114Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8503909Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8504703Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8505501Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 103, in forward 2025-09-09T14:55:34.8506250Z cur_out = F.linear(y1 * y3, y2) 2025-09-09T14:55:34.8506446Z 2025-09-09T14:55:34.8506561Z cudagraph partition due to non gpu ops 2025-09-09T14:55:34.8506943Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8507719Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8508533Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8509391Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 103, in forward 2025-09-09T14:55:34.8510439Z cur_out = F.linear(y1 * y3, y2) 2025-09-09T14:55:34.8510638Z 2025-09-09T14:55:34.8510766Z cudagraph partition due to non gpu ops 2025-09-09T14:55:34.8511135Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:34.8511930Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8512726Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8513537Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 108, in forward 2025-09-09T14:55:34.8514345Z (torch.cat(outs, dim=0) * expert_weights.view(-1, 1)) 2025-09-09T14:55:34.8514688Z 2025-09-09T14:55:34.8514847Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8515651Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 40, in forward 2025-09-09T14:55:34.8516511Z out = self.experts(x, expert_indices, scores, self.top_k) 2025-09-09T14:55:34.8517322Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 108, in forward 2025-09-09T14:55:34.8518127Z (torch.cat(outs, dim=0) * expert_weights.view(-1, 1)) 2025-09-09T14:55:34.8518391Z 2025-09-09T14:55:34.8518684Z PASSED 2025-09-09T14:55:34.8519427Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8wo_base_cpu_1_multiple_tokens cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8520629Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 33, in forward 2025-09-09T14:55:34.8521386Z scores = self.router(x) # [T, E] 2025-09-09T14:55:34.8521594Z 2025-09-09T14:55:34.8521725Z cudagraph partition due to non gpu ops 2025-09-09T14:55:34.8522090Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8522880Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 35, in forward 2025-09-09T14:55:34.8523619Z scores, expert_indices = torch.topk( 2025-09-09T14:55:34.8523848Z 2025-09-09T14:55:34.8524065Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8524835Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 35, in forward 2025-09-09T14:55:34.8525586Z scores, expert_indices = torch.topk( 2025-09-09T14:55:34.8525801Z 2025-09-09T14:55:34.8525968Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8526737Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 38, in forward 2025-09-09T14:55:34.8527615Z scores /= scores.sum(dim=-1, keepdim=True).to(x.dtype) # [T, A] 2025-09-09T14:55:34.8527919Z 2025-09-09T14:55:34.8528034Z cudagraph partition due to non gpu ops 2025-09-09T14:55:34.8528410Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8529205Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 125, in forward 2025-09-09T14:55:34.8530017Z expert_indices.view(-1) + 1, minlength=self.num_experts + 1 2025-09-09T14:55:34.8530329Z 2025-09-09T14:55:34.8530485Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:34.8531267Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 117, in forward 2025-09-09T14:55:34.8532094Z ordered_token_activations = expert_indices.view(-1).argsort( 2025-09-09T14:55:34.8532394Z 2025-09-09T14:55:34.8532558Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8533387Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 124, in forward 2025-09-09T14:55:34.8534149Z num_tokens_per_expert = torch.bincount( 2025-09-09T14:55:34.8534373Z 2025-09-09T14:55:34.8534529Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8535322Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 124, in forward 2025-09-09T14:55:34.8536081Z num_tokens_per_expert = torch.bincount( 2025-09-09T14:55:34.8536304Z 2025-09-09T14:55:34.8536458Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8537257Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 134, in forward 2025-09-09T14:55:34.8538068Z cum_tokens_per_expert = num_tokens_per_expert.cumsum(0).to( 2025-09-09T14:55:34.8538372Z 2025-09-09T14:55:34.8538491Z cudagraph partition due to non gpu ops 2025-09-09T14:55:34.8538830Z cudagraph partition due to non gpu ops 2025-09-09T14:55:34.8539229Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8540021Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 134, in forward 2025-09-09T14:55:34.8540835Z cum_tokens_per_expert = num_tokens_per_expert.cumsum(0).to( 2025-09-09T14:55:34.8541146Z 2025-09-09T14:55:34.8541301Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8542078Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 121, in forward 2025-09-09T14:55:34.8542915Z ordered_token_activations.div(top_k).floor().to(torch.int64) 2025-09-09T14:55:34.8543219Z 2025-09-09T14:55:34.8543863Z W0909 14:55:33.860485 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/0] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:55:34.8544759Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8545685Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:34.8546564Z tokens_grouped_by_expert = [ 2025-09-09T14:55:34.8547322Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:34.8548138Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:34.8548394Z 2025-09-09T14:55:34.8548550Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:34.8549467Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:34.8550350Z tokens_grouped_by_expert = [ 2025-09-09T14:55:34.8551109Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:34.8551913Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:34.8552168Z 2025-09-09T14:55:34.8552325Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:34.8553251Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0178314Z tokens_grouped_by_expert = [ 2025-09-09T14:55:38.0179779Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:38.0181308Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:38.0181792Z 2025-09-09T14:55:38.0182084Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0184144Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0185731Z tokens_grouped_by_expert = [ 2025-09-09T14:55:38.0187038Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:38.0188451Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:38.0188917Z 2025-09-09T14:55:38.0189214Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0190874Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0192444Z final_out = final_out.scatter_add( 2025-09-09T14:55:38.0192814Z 2025-09-09T14:55:38.0193086Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0194802Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0196601Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:38.0196979Z 2025-09-09T14:55:38.0197258Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0198948Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0200567Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:38.0200888Z 2025-09-09T14:55:38.0201196Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0202919Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0204549Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0204997Z 2025-09-09T14:55:38.0205300Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:38.0207018Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0208710Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0209139Z 2025-09-09T14:55:38.0209430Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0211658Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0213238Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:38.0213612Z 2025-09-09T14:55:38.0213892Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0215523Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0217095Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:38.0217566Z 2025-09-09T14:55:38.0217861Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0219531Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0221186Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0221593Z 2025-09-09T14:55:38.0221887Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0223459Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0224962Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0225373Z 2025-09-09T14:55:38.0225661Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0227183Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0228935Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:38.0229293Z 2025-09-09T14:55:38.0229561Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0231098Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0232617Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:38.0232929Z 2025-09-09T14:55:38.0233208Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0234858Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0236405Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0236830Z 2025-09-09T14:55:38.0237087Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0238699Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0240421Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0240812Z 2025-09-09T14:55:38.0241094Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:38.0242655Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0244207Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:38.0244569Z 2025-09-09T14:55:38.0244857Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0246489Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0248022Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:38.0248334Z 2025-09-09T14:55:38.0248595Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0250177Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0251755Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0252176Z 2025-09-09T14:55:38.0252447Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0254185Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0255716Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0256156Z 2025-09-09T14:55:38.0256444Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0258180Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0259817Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:38.0260148Z 2025-09-09T14:55:38.0260411Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0261986Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0263564Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:38.0263892Z 2025-09-09T14:55:38.0264196Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0265890Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0267579Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0268009Z 2025-09-09T14:55:38.0268300Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0270153Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0271774Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0272189Z 2025-09-09T14:55:38.0272463Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0274161Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0275842Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:38.0276213Z 2025-09-09T14:55:38.0276493Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:38.0278220Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0279831Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:38.0280148Z 2025-09-09T14:55:38.0280440Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0282130Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0283894Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0284320Z 2025-09-09T14:55:38.0284618Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0286336Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0288021Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:38.0288456Z 2025-09-09T14:55:38.0288672Z cudagraph partition due to non gpu ops 2025-09-09T14:55:38.0289291Z cudagraph partition due to non gpu ops 2025-09-09T14:55:38.0289884Z cudagraph partition due to non gpu ops 2025-09-09T14:55:38.0290481Z cudagraph partition due to non gpu ops 2025-09-09T14:55:38.0291068Z cudagraph partition due to non gpu ops 2025-09-09T14:55:38.0291662Z cudagraph partition due to non gpu ops 2025-09-09T14:55:38.0292341Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0294024Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 174, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0295691Z ordered_outs = torch.cat(outs, dim=0) # [T*A, D] 2025-09-09T14:55:38.0296257Z 2025-09-09T14:55:38.0296514Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:38.0298160Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:38.0299823Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:55:38.0300274Z 2025-09-09T14:55:38.0300480Z cudagraph partition due to non gpu ops 2025-09-09T14:55:39.5692701Z W0909 14:55:38.016848 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/1] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:55:39.5694782Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5696555Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5698171Z tokens_grouped_by_expert = [ 2025-09-09T14:55:39.5699427Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:39.5700892Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:39.5701321Z 2025-09-09T14:55:39.5701588Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:39.5703244Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5704876Z tokens_grouped_by_expert = [ 2025-09-09T14:55:39.5706376Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:39.5707756Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:39.5708163Z 2025-09-09T14:55:39.5708396Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5710360Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5711938Z tokens_grouped_by_expert = [ 2025-09-09T14:55:39.5713165Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:39.5714643Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:39.5715114Z 2025-09-09T14:55:39.5715395Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5717084Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5718866Z tokens_grouped_by_expert = [ 2025-09-09T14:55:39.5720219Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:39.5721707Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:39.5722182Z 2025-09-09T14:55:39.5722464Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5724154Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5725756Z tokens_grouped_by_expert = [ 2025-09-09T14:55:39.5727118Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:39.5728609Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:39.5729083Z 2025-09-09T14:55:39.5729369Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5731071Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5732827Z tokens_grouped_by_expert = [ 2025-09-09T14:55:39.5734180Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:39.5735664Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:39.5736137Z 2025-09-09T14:55:39.5736424Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5738080Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5739804Z final_out = final_out.scatter_add( 2025-09-09T14:55:39.5740215Z 2025-09-09T14:55:39.5740494Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:39.5742205Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5743812Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:39.5744192Z 2025-09-09T14:55:39.5744472Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5746140Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5747754Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:39.5748077Z 2025-09-09T14:55:39.5748369Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5750184Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5751834Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5752256Z 2025-09-09T14:55:39.5752543Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5754261Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5756014Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5756430Z 2025-09-09T14:55:39.5756718Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5758398Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5760012Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:39.5760390Z 2025-09-09T14:55:39.5760679Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5762604Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5764193Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:39.5764528Z 2025-09-09T14:55:39.5764809Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5766519Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5768168Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5768587Z 2025-09-09T14:55:39.5768886Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5770580Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5772218Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5772657Z 2025-09-09T14:55:39.5772941Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5774617Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5776233Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:39.5776596Z 2025-09-09T14:55:39.5776985Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:39.5778716Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5780301Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:39.5780626Z 2025-09-09T14:55:39.5780910Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5782624Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5784354Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5784796Z 2025-09-09T14:55:39.5785082Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5786738Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5788388Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5788818Z 2025-09-09T14:55:39.5789114Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5790813Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5792443Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:39.5792801Z 2025-09-09T14:55:39.5793071Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5794953Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5796566Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:39.5796883Z 2025-09-09T14:55:39.5797167Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5798846Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5800471Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5800913Z 2025-09-09T14:55:39.5801196Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5802904Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5804550Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5804997Z 2025-09-09T14:55:39.5805279Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5807081Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5808707Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:39.5809069Z 2025-09-09T14:55:39.5809370Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:39.5811267Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5812871Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:39.5813190Z 2025-09-09T14:55:39.5813477Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:39.5815205Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:39.5816856Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:39.5817283Z 2025-09-09T14:55:39.5817571Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1769220Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1770837Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1771132Z 2025-09-09T14:55:44.1771379Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1772644Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1773656Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1773884Z 2025-09-09T14:55:44.1774080Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1775235Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1776229Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1776441Z 2025-09-09T14:55:44.1776623Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1777689Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1778728Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1779010Z 2025-09-09T14:55:44.1779170Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1780198Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1781095Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1781330Z 2025-09-09T14:55:44.1781551Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1782477Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1783331Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1783547Z 2025-09-09T14:55:44.1783706Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1784639Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1785507Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1785682Z 2025-09-09T14:55:44.1785839Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1786757Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1787656Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1787938Z 2025-09-09T14:55:44.1788094Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:44.1789020Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1789902Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1790149Z 2025-09-09T14:55:44.1790306Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1791230Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1792098Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1792315Z 2025-09-09T14:55:44.1792473Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1793382Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1794245Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1794422Z 2025-09-09T14:55:44.1794667Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1795630Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1796531Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1796765Z 2025-09-09T14:55:44.1796923Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1797838Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1798715Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1798944Z 2025-09-09T14:55:44.1799093Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1799437Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1799762Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1800091Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1800410Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1800734Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1801067Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1801384Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1801756Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1802668Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 174, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1803586Z ordered_outs = torch.cat(outs, dim=0) # [T*A, D] 2025-09-09T14:55:44.1803837Z 2025-09-09T14:55:44.1803993Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1804951Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1805866Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:55:44.1806116Z 2025-09-09T14:55:44.1806229Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1807095Z W0909 14:55:42.680759 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/2] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:55:44.1807974Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:44.1808904Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1809780Z tokens_grouped_by_expert = [ 2025-09-09T14:55:44.1810802Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:44.1811624Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:44.1811964Z 2025-09-09T14:55:44.1812122Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1813041Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1813911Z tokens_grouped_by_expert = [ 2025-09-09T14:55:44.1814638Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:44.1815451Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:44.1815705Z 2025-09-09T14:55:44.1815860Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1816777Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1817654Z tokens_grouped_by_expert = [ 2025-09-09T14:55:44.1818388Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:44.1819204Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:44.1819460Z 2025-09-09T14:55:44.1819616Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1820590Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1821467Z tokens_grouped_by_expert = [ 2025-09-09T14:55:44.1822194Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:44.1823003Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:44.1823256Z 2025-09-09T14:55:44.1823415Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1824390Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1825262Z tokens_grouped_by_expert = [ 2025-09-09T14:55:44.1825985Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:55:44.1826797Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:55:44.1827052Z 2025-09-09T14:55:44.1827208Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1828135Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1829017Z final_out = final_out.scatter_add( 2025-09-09T14:55:44.1829229Z 2025-09-09T14:55:44.1829385Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:44.1830356Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1831217Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1831430Z 2025-09-09T14:55:44.1831586Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1832508Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1833366Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1833539Z 2025-09-09T14:55:44.1833705Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1834689Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1835586Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1835820Z 2025-09-09T14:55:44.1835994Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1836943Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1837931Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1838162Z 2025-09-09T14:55:44.1838375Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1839287Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1840164Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1840363Z 2025-09-09T14:55:44.1840517Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1841443Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1842325Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1842503Z 2025-09-09T14:55:44.1842658Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1843577Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1844495Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1844743Z 2025-09-09T14:55:44.1844899Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1845815Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1846702Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1846947Z 2025-09-09T14:55:44.1847104Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1848050Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1848926Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1849124Z 2025-09-09T14:55:44.1849292Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:44.1850199Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1851065Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1851239Z 2025-09-09T14:55:44.1851394Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1852314Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1853203Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1853434Z 2025-09-09T14:55:44.1853589Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1854572Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1855455Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1855701Z 2025-09-09T14:55:44.1855862Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1856785Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1857648Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1857863Z 2025-09-09T14:55:44.1858021Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1858933Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1859805Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1859983Z 2025-09-09T14:55:44.1860190Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1861096Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1861989Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1862222Z 2025-09-09T14:55:44.1862379Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1863305Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1864193Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1864422Z 2025-09-09T14:55:44.1864580Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1865511Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1866370Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1866582Z 2025-09-09T14:55:44.1866737Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1867684Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1868532Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1868719Z 2025-09-09T14:55:44.1868873Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:44.1869783Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1870676Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1870905Z 2025-09-09T14:55:44.1871073Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1872009Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1872899Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1873126Z 2025-09-09T14:55:44.1873283Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1874202Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1875151Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1875350Z 2025-09-09T14:55:44.1875509Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1876428Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1877279Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1877464Z 2025-09-09T14:55:44.1877694Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1878624Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1879505Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1879748Z 2025-09-09T14:55:44.1879910Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1880817Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1881708Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1881938Z 2025-09-09T14:55:44.1882107Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1883011Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1883925Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1884124Z 2025-09-09T14:55:44.1884280Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1885203Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1886064Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1886236Z 2025-09-09T14:55:44.1886393Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1887306Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1888181Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1888422Z 2025-09-09T14:55:44.1888578Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:55:44.1889499Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1890377Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1890618Z 2025-09-09T14:55:44.1890774Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1891725Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1892599Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:55:44.1892798Z 2025-09-09T14:55:44.1892966Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1893868Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1894728Z y3 = F.linear(cur_x, w3) 2025-09-09T14:55:44.1894936Z 2025-09-09T14:55:44.1895093Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1896020Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1896910Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1897139Z 2025-09-09T14:55:44.1897294Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1898212Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1899088Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:55:44.1899329Z 2025-09-09T14:55:44.1899444Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1899785Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1900105Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1900435Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1900791Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1901126Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1901439Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1901770Z cudagraph partition due to non gpu ops 2025-09-09T14:55:44.1902131Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1903050Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 174, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1903967Z ordered_outs = torch.cat(outs, dim=0) # [T*A, D] 2025-09-09T14:55:44.1904214Z 2025-09-09T14:55:44.1904369Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:55:44.1905286Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:55:44.1906182Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:55:44.1906447Z 2025-09-09T14:55:48.5408187Z cudagraph partition due to non gpu ops 2025-09-09T14:55:48.5409513Z W0909 14:55:47.119931 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/3] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:55:48.5410910Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-09T14:55:48.5411923Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152
2025-09-09T14:55:48.5412808Z tokens_grouped_by_expert = [
2025-09-09T14:55:48.5413554Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in
2025-09-09T14:55:48.5414355Z x[indices] for indices in token_indices_per_expert
2025-09-09T14:55:48.5426300Z cudagraph partition due to non gpu ops. Found from :
2025-09-09T14:55:48.5427205Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152
2025-09-09T14:55:48.5428142Z final_out = final_out.scatter_add(
2025-09-09T14:55:48.5432780Z cudagraph partition due to non gpu ops.
2025-09-09T14:55:53.0930190Z W0909 14:55:51.569143 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/4] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16)
2025-09-09T14:55:53.0931095Z cudagraph partition due to non gpu ops.
2025-09-09T14:55:57.5021430Z W0909 14:55:55.993929 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/5] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16)
2025-09-09T14:55:57.5022333Z cudagraph partition due to non gpu ops.
2025-09-09T14:56:01.6466165Z W0909 14:56:00.327529 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/6] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16)
2025-09-09T14:56:01.6467074Z cudagraph partition due to non gpu ops.
Found from : 2025-09-09T14:56:05.7919265Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7920311Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:56:05.7920563Z 2025-09-09T14:56:05.7920679Z cudagraph partition due to non gpu ops 2025-09-09T14:56:05.7921600Z W0909 14:56:04.405723 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/7] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:56:05.7922636Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7923618Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7924558Z tokens_grouped_by_expert = [ 2025-09-09T14:56:05.7925354Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:05.7926281Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:05.7926607Z 2025-09-09T14:56:05.7926777Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7927752Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7928619Z tokens_grouped_by_expert = [ 2025-09-09T14:56:05.7929406Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:05.7930271Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:05.7930524Z 2025-09-09T14:56:05.7930690Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7931659Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7932539Z tokens_grouped_by_expert = [ 2025-09-09T14:56:05.7933260Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:05.7934065Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:05.7934322Z 2025-09-09T14:56:05.7934495Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7935466Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7936385Z tokens_grouped_by_expert = [ 2025-09-09T14:56:05.7937112Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:05.7937920Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:05.7938171Z 2025-09-09T14:56:05.7938329Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7939248Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7940130Z final_out = final_out.scatter_add( 2025-09-09T14:56:05.7940344Z 2025-09-09T14:56:05.7940500Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:05.7941418Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7942345Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:05.7942636Z 2025-09-09T14:56:05.7942792Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7943712Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7944565Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:05.7944755Z 2025-09-09T14:56:05.7944910Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7945813Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7946702Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:05.7946935Z 2025-09-09T14:56:05.7947102Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7948005Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7948898Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:05.7949127Z 2025-09-09T14:56:05.7949282Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7950229Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7951096Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:05.7951295Z 2025-09-09T14:56:05.7951450Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7952364Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7953211Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:05.7953396Z 2025-09-09T14:56:05.7953585Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7954498Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7955528Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:05.7955772Z 2025-09-09T14:56:05.7955928Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7956830Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7957719Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:05.7957946Z 2025-09-09T14:56:05.7958114Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7959018Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7959884Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:05.7960081Z 2025-09-09T14:56:05.7960306Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:05.7961228Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7962094Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:05.7962268Z 2025-09-09T14:56:05.7962426Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7963348Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:05.7964222Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:05.7964467Z 2025-09-09T14:56:05.7964621Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:05.7965536Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0266364Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0266936Z 2025-09-09T14:56:10.0267122Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0268074Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0268963Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0269168Z 2025-09-09T14:56:10.0269340Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0270250Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0271124Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0271298Z 2025-09-09T14:56:10.0271469Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0272390Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0273374Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0273605Z 2025-09-09T14:56:10.0273762Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0275546Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0276862Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0277317Z 2025-09-09T14:56:10.0277507Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0278427Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0279288Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0279609Z 2025-09-09T14:56:10.0279771Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0280699Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0281558Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0281731Z 2025-09-09T14:56:10.0281901Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:10.0282808Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0283699Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0283930Z 2025-09-09T14:56:10.0284098Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0285004Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0285949Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0286183Z 2025-09-09T14:56:10.0286348Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0287683Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0288573Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0288774Z 2025-09-09T14:56:10.0288929Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0289845Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0290694Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0290881Z 2025-09-09T14:56:10.0291035Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0291957Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0292920Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0293151Z 2025-09-09T14:56:10.0293318Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0294223Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0295118Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0295346Z 2025-09-09T14:56:10.0295513Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0296417Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0297284Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0297484Z 2025-09-09T14:56:10.0297637Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0298557Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0299420Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0299595Z 2025-09-09T14:56:10.0299748Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0300700Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0301575Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0301817Z 2025-09-09T14:56:10.0301973Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:10.0302942Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0303871Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0304105Z 2025-09-09T14:56:10.0304221Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0304556Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0304876Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0305210Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0305526Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0305860Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0306190Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0306551Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0307470Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 174, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0308373Z ordered_outs = torch.cat(outs, dim=0) # [T*A, D] 2025-09-09T14:56:10.0308633Z 2025-09-09T14:56:10.0308789Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0309736Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0310955Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:56:10.0311210Z 2025-09-09T14:56:10.0311340Z cudagraph partition due to non gpu ops 2025-09-09T14:56:10.0312189Z W0909 14:56:08.518209 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/8] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:56:10.0313098Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0314020Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0314956Z tokens_grouped_by_expert = [ 2025-09-09T14:56:10.0315710Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:10.0316585Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:10.0316856Z 2025-09-09T14:56:10.0317014Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0317946Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0318814Z tokens_grouped_by_expert = [ 2025-09-09T14:56:10.0319556Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:10.0320353Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:10.0320622Z 2025-09-09T14:56:10.0320778Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:10.0321698Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0322567Z tokens_grouped_by_expert = [ 2025-09-09T14:56:10.0323303Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:10.0324099Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:10.0324363Z 2025-09-09T14:56:10.0324559Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0325480Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0326342Z tokens_grouped_by_expert = [ 2025-09-09T14:56:10.0327077Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:10.0327869Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:10.0328180Z 2025-09-09T14:56:10.0328336Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0329244Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0330116Z tokens_grouped_by_expert = [ 2025-09-09T14:56:10.0330853Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:10.0331642Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:10.0331914Z 2025-09-09T14:56:10.0332069Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0332972Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0333841Z tokens_grouped_by_expert = [ 2025-09-09T14:56:10.0334613Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:10.0335413Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:10.0335677Z 2025-09-09T14:56:10.0335831Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0336743Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0337612Z final_out = final_out.scatter_add( 2025-09-09T14:56:10.0337849Z 2025-09-09T14:56:10.0338005Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0338929Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0339792Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0340005Z 2025-09-09T14:56:10.0340164Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0341081Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0341979Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0342154Z 2025-09-09T14:56:10.0342324Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:10.0343238Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0344136Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0344369Z 2025-09-09T14:56:10.0344526Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0345451Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0346348Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0346582Z 2025-09-09T14:56:10.0346743Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0347664Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0348524Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0348735Z 2025-09-09T14:56:10.0348947Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0349860Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0350712Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0350891Z 2025-09-09T14:56:10.0351059Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0351972Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0352961Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0353190Z 2025-09-09T14:56:10.0353358Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0354259Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0355227Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0355455Z 2025-09-09T14:56:10.0355608Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0356528Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0357400Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0357598Z 2025-09-09T14:56:10.0357752Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0358701Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0359560Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0359744Z 2025-09-09T14:56:10.0359898Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0360815Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0361697Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0361939Z 2025-09-09T14:56:10.0362094Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:10.0362993Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0363883Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0364115Z 2025-09-09T14:56:10.0364288Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0365222Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0366094Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0366293Z 2025-09-09T14:56:10.0366453Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0367371Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0368238Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0368411Z 2025-09-09T14:56:10.0368566Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0369489Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0370371Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0370613Z 2025-09-09T14:56:10.0370767Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0371680Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0372584Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0372827Z 2025-09-09T14:56:10.0372981Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0373888Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0374765Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0374965Z 2025-09-09T14:56:10.0375132Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0376067Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0376935Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0377109Z 2025-09-09T14:56:10.0377263Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0378183Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0379068Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0379295Z 2025-09-09T14:56:10.0379448Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0380363Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0381237Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0381480Z 2025-09-09T14:56:10.0381657Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:10.0382573Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0383426Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0383635Z 2025-09-09T14:56:10.0383789Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0384688Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0385553Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0385725Z 2025-09-09T14:56:10.0385891Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0386792Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0387681Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0387939Z 2025-09-09T14:56:10.0388094Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0389014Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0389897Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0390128Z 2025-09-09T14:56:10.0390282Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0391195Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0392049Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0392258Z 2025-09-09T14:56:10.0392411Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0393334Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0394190Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0394366Z 2025-09-09T14:56:10.0394596Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0395549Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0396444Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0396674Z 2025-09-09T14:56:10.0396841Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0397742Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0398629Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0398858Z 2025-09-09T14:56:10.0399014Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0399967Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0400848Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:10.0401049Z 2025-09-09T14:56:10.0401205Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:10.0402120Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0402971Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:10.0403157Z 2025-09-09T14:56:10.0403310Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:10.0404227Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:10.0405101Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:10.0405345Z 2025-09-09T14:56:14.2337386Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2338964Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2340188Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2340496Z 2025-09-09T14:56:14.2340642Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2341107Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2341443Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2341986Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2342324Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2342642Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2342972Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2343287Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2343672Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2344594Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 174, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2345602Z ordered_outs = torch.cat(outs, dim=0) # [T*A, D] 2025-09-09T14:56:14.2345853Z 2025-09-09T14:56:14.2346022Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2346931Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2347840Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:56:14.2348092Z 2025-09-09T14:56:14.2348206Z cudagraph partition due to non gpu ops 2025-09-09T14:56:14.2349056Z W0909 14:56:12.800066 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/9] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:56:14.2349962Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2350885Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2351759Z tokens_grouped_by_expert = [ 2025-09-09T14:56:14.2352553Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:14.2353370Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:14.2353626Z 2025-09-09T14:56:14.2353800Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:14.2354810Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2355687Z tokens_grouped_by_expert = [ 2025-09-09T14:56:14.2356421Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:14.2357303Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:14.2357556Z 2025-09-09T14:56:14.2357723Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2358629Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2359502Z tokens_grouped_by_expert = [ 2025-09-09T14:56:14.2360228Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:14.2361033Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:14.2361288Z 2025-09-09T14:56:14.2361456Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2362401Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2363275Z tokens_grouped_by_expert = [ 2025-09-09T14:56:14.2363996Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:14.2364811Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:14.2365064Z 2025-09-09T14:56:14.2365236Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2366146Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2367025Z tokens_grouped_by_expert = [ 2025-09-09T14:56:14.2367746Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:14.2368552Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:14.2368806Z 2025-09-09T14:56:14.2368978Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2369913Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2370785Z final_out = final_out.scatter_add( 2025-09-09T14:56:14.2370994Z 2025-09-09T14:56:14.2371150Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2372074Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2372941Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:14.2373140Z 2025-09-09T14:56:14.2373295Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2374208Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2375068Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:14.2375256Z 2025-09-09T14:56:14.2375410Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:14.2376328Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2377197Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2377463Z 2025-09-09T14:56:14.2377630Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2378534Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2379419Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2379647Z 2025-09-09T14:56:14.2379811Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2380715Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2381614Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:14.2381810Z 2025-09-09T14:56:14.2381965Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2382879Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2383744Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:14.2383916Z 2025-09-09T14:56:14.2384069Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2384979Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2385855Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2386095Z 2025-09-09T14:56:14.2386252Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2387199Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2388076Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2388304Z 2025-09-09T14:56:14.2388471Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2389376Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2390246Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:14.2390446Z 2025-09-09T14:56:14.2390617Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2391526Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2392406Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:14.2392578Z 2025-09-09T14:56:14.2392768Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2393689Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2394648Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2394882Z 2025-09-09T14:56:14.2395039Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:14.2395960Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2396832Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2397074Z 2025-09-09T14:56:14.2397228Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2398148Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2399009Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:14.2399208Z 2025-09-09T14:56:14.2399375Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2400275Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2401181Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:14.2401363Z 2025-09-09T14:56:14.2401531Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2402439Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2403329Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2403585Z 2025-09-09T14:56:14.2403740Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:14.2404684Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:14.2405576Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:14.2405805Z 2025-09-09T14:56:14.2405974Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3403793Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3405368Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:18.3405648Z 2025-09-09T14:56:18.3406039Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3407474Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3408537Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:18.3408807Z 2025-09-09T14:56:18.3409221Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3410662Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3411564Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3411799Z 2025-09-09T14:56:18.3411964Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3412888Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3413769Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3414012Z 2025-09-09T14:56:18.3414170Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:18.3415084Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3415941Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:18.3416249Z 2025-09-09T14:56:18.3416411Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3417318Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3418195Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:18.3418372Z 2025-09-09T14:56:18.3418543Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3419455Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3420344Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3420575Z 2025-09-09T14:56:18.3420730Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3421649Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3422536Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3422763Z 2025-09-09T14:56:18.3422920Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3423893Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3424750Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:18.3424963Z 2025-09-09T14:56:18.3425122Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3426043Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3426896Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:18.3427158Z 2025-09-09T14:56:18.3427313Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3428225Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3429116Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3429344Z 2025-09-09T14:56:18.3429514Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3430422Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3431304Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3431535Z 2025-09-09T14:56:18.3431698Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3432612Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3433521Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:18.3433723Z 2025-09-09T14:56:18.3433876Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:18.3434872Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3435722Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:18.3435908Z 2025-09-09T14:56:18.3436064Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3436983Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3437854Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3438098Z 2025-09-09T14:56:18.3438252Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3439160Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3440081Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3440309Z 2025-09-09T14:56:18.3440437Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3440761Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3441101Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3441421Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3441751Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3442066Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3442391Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3442708Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3443085Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3444010Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 174, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3444906Z ordered_outs = torch.cat(outs, dim=0) # [T*A, D] 2025-09-09T14:56:18.3445170Z 2025-09-09T14:56:18.3445324Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3446229Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3447193Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:56:18.3447447Z 2025-09-09T14:56:18.3447572Z cudagraph partition due to non gpu ops 2025-09-09T14:56:18.3448417Z W0909 14:56:17.000132 320 site-packages/torch/fx/experimental/symbolic_shapes.py:6850] [1883/10] _maybe_guard_rel() was called on non-relation expression Eq(s61, 1) | Eq(s61, 16) 2025-09-09T14:56:18.3449331Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3450247Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3451166Z tokens_grouped_by_expert = [ 2025-09-09T14:56:18.3451910Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:18.3452714Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:18.3452981Z 2025-09-09T14:56:18.3453140Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-09T14:56:18.3454051Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3454928Z tokens_grouped_by_expert = [ 2025-09-09T14:56:18.3455663Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:18.3456457Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:18.3456725Z 2025-09-09T14:56:18.3456909Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3457819Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3458690Z tokens_grouped_by_expert = [ 2025-09-09T14:56:18.3459429Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:18.3460221Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:18.3460476Z 2025-09-09T14:56:18.3460640Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3461547Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 155, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3462417Z tokens_grouped_by_expert = [ 2025-09-09T14:56:18.3463152Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 156, in 2025-09-09T14:56:18.3463975Z x[indices] for indices in token_indices_per_expert 2025-09-09T14:56:18.3464229Z 2025-09-09T14:56:18.3464395Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3465303Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 184, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3466180Z final_out = final_out.scatter_add( 2025-09-09T14:56:18.3466392Z 2025-09-09T14:56:18.3466557Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3467460Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3468328Z y1 = F.silu(F.linear(cur_x, w1)) 2025-09-09T14:56:18.3468526Z 2025-09-09T14:56:18.3468679Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3469598Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3470460Z y3 = F.linear(cur_x, w3) 2025-09-09T14:56:18.3470633Z 2025-09-09T14:56:18.3470788Z cudagraph partition due to non gpu ops. Found from : 2025-09-09T14:56:18.3471738Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:18.3472624Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D] 2025-09-09T14:56:18.3472865Z 2025-09-09T14:56:18.3473019Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-09T14:56:20.8495305Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 170, in torch_dynamo_resume_in_forward_at_152
2025-09-09T14:56:20.8512709Z cur_out = F.linear(y1 * y3, y2) # [T'(e), D]
2025-09-09T14:56:20.8514178Z cudagraph partition due to non gpu ops. Found from :
2025-09-09T14:56:20.8516389Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 166, in torch_dynamo_resume_in_forward_at_152
2025-09-09T14:56:20.8518369Z y1 = F.silu(F.linear(cur_x, w1))
2025-09-09T14:56:20.8519177Z cudagraph partition due to non gpu ops. Found from :
2025-09-09T14:56:20.8521094Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 167, in torch_dynamo_resume_in_forward_at_152
2025-09-09T14:56:20.8522567Z y3 = F.linear(cur_x, w3)
2025-09-09T14:56:20.8608497Z cudagraph partition due to non gpu ops
2025-09-09T14:56:20.8612927Z cudagraph partition due to non gpu ops. Found from :
2025-09-09T14:56:20.8614483Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 174, in torch_dynamo_resume_in_forward_at_152
2025-09-09T14:56:20.8616059Z ordered_outs = torch.cat(outs, dim=0) # [T*A, D]
2025-09-09T14:56:20.8616780Z cudagraph partition due to non gpu ops.
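For context, the inductor warnings above all point at the per-expert feed-forward in torchao's prototype quantizable MoE module. Below is a minimal sketch of that computation, reconstructed only from the source lines quoted in these warnings; the expert loop, the argument shapes, and the weight names (w1, y2, w3) are assumptions for illustration, not the actual module code.

import torch
import torch.nn.functional as F

def moe_expert_forward_sketch(expert_inputs, expert_weights, ordered_token_activation_weights):
    # expert_inputs: assumed list of [T'(e), D] token slices, one per expert
    # expert_weights: assumed list of (w1, y2, w3) weight tuples, one per expert
    outs = []
    for cur_x, (w1, y2, w3) in zip(expert_inputs, expert_weights):
        y1 = F.silu(F.linear(cur_x, w1))   # line 166 quoted above
        y3 = F.linear(cur_x, w3)           # line 167 quoted above
        cur_out = F.linear(y1 * y3, y2)    # line 170 quoted above: [T'(e), D]
        outs.append(cur_out)
    ordered_outs = torch.cat(outs, dim=0)  # line 174 quoted above: [T*A, D]
    # line 179 (quoted in the next warning below) rescales each token's output:
    return ordered_outs * ordered_token_activation_weights

Each "cudagraph partition due to non gpu ops" message indicates that inductor partitions its CUDA-graph capture around one of these calls; the test that emitted them still reports PASSED below.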
Found from : 2025-09-09T14:56:20.8618360Z File "/opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/moe_quant/quantizable_moe_modules.py", line 179, in torch_dynamo_resume_in_forward_at_152 2025-09-09T14:56:20.8619939Z ordered_outs * ordered_token_activation_weights 2025-09-09T14:56:20.8620529Z 2025-09-09T14:56:20.8620745Z cudagraph partition due to non gpu ops 2025-09-09T14:56:20.8621571Z PASSED 2025-09-09T14:56:20.8622694Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8wo_fake_dim_0_single_token SKIPPED 2025-09-09T14:56:20.8624471Z test/quantization/test_moe_quant.py::TestMoEQuantCompile::test_int8wo_fake_dim_1_multiple_tokens SKIPPED 2025-09-09T14:56:20.8626286Z test/quantization/test_observer.py::TestQuantFlow::test_block_size_calc_success PASSED 2025-09-09T14:56:20.8627947Z test/quantization/test_observer.py::TestQuantFlow::test_block_size_row_errors PASSED 2025-09-09T14:56:20.8629626Z test/quantization/test_observer.py::TestQuantFlow::test_fixed_qparams_observer PASSED 2025-09-09T14:56:20.8631196Z test/quantization/test_observer.py::TestQuantFlow::test_min_max_per_channel_affine PASSED 2025-09-09T14:56:20.8632673Z test/quantization/test_observer.py::TestQuantFlow::test_min_max_per_tensor_affine PASSED 2025-09-09T14:56:20.8634276Z test/quantization/test_observer.py::TestQuantFlow::test_mse_observer PASSED 2025-09-09T14:56:39.9125486Z test/quantization/test_observer.py::TestLinearObserver::test_linear_observer_tensor_observe_weight_False PASSED 2025-09-09T14:56:39.9126884Z test/quantization/test_observer.py::TestLinearObserver::test_linear_observer_tensor_observe_weight_True PASSED 2025-09-09T14:56:39.9127835Z test/quantization/test_qat.py::TestQAT::test_composable_qat_quantizer PASSED 2025-09-09T14:56:39.9128606Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_config_dtype PASSED 2025-09-09T14:56:39.9129573Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_config_dynamic_and_range_learning PASSED 2025-09-09T14:56:39.9130507Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_config_eps PASSED 2025-09-09T14:56:39.9131412Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_config_granularity PASSED 2025-09-09T14:56:39.9132363Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_config_granularity_error_cases PASSED 2025-09-09T14:56:39.9133374Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_config_mapping_type PASSED 2025-09-09T14:56:39.9134207Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_config_torch_intx PASSED 2025-09-09T14:56:39.9135014Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_per_channel_group PASSED 2025-09-09T14:56:39.9135797Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_per_token PASSED 2025-09-09T14:56:39.9136647Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_per_token_vs_convert_bfloat16 PASSED 2025-09-09T14:56:39.9137563Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_per_token_vs_convert_float16 PASSED 2025-09-09T14:56:39.9138687Z test/quantization/test_qat.py::TestQAT::test_fake_quantize_per_token_vs_convert_float32 PASSED 2025-09-09T14:56:39.9139550Z test/quantization/test_qat.py::TestQAT::test_fake_quantized_embedding_4w PASSED 2025-09-09T14:56:39.9140327Z test/quantization/test_qat.py::TestQAT::test_fake_quantized_linear_4w PASSED 2025-09-09T14:56:39.9141092Z test/quantization/test_qat.py::TestQAT::test_fake_quantized_linear_8da4w PASSED 2025-09-09T14:56:39.9142138Z 
test/quantization/test_qat.py::TestQAT::test_fake_quantizer_range_learning_is_symmetric_False PASSED 2025-09-09T14:56:39.9143128Z test/quantization/test_qat.py::TestQAT::test_fake_quantizer_range_learning_is_symmetric_True PASSED 2025-09-09T14:56:39.9143965Z test/quantization/test_qat.py::TestQAT::test_fake_quantizer_repr PASSED 2025-09-09T14:56:39.9144709Z test/quantization/test_qat.py::TestQAT::test_fbgemm_fp8_primitives SKIPPED 2025-09-09T14:56:39.9145523Z test/quantization/test_qat.py::TestQAT::test_fbgemm_int4_preshuffled_primitives SKIPPED 2025-09-09T14:56:39.9146471Z test/quantization/test_qat.py::TestQAT::test_float8_fake_quantize_config PASSED 2025-09-09T14:56:39.9147397Z test/quantization/test_qat.py::TestQAT::test_float8_fake_quantize_granularity0 PASSED 2025-09-09T14:56:39.9148240Z test/quantization/test_qat.py::TestQAT::test_float8_fake_quantize_granularity1 PASSED 2025-09-09T14:56:39.9149027Z test/quantization/test_qat.py::TestQAT::test_infer_fp8_int4_config PASSED 2025-09-09T14:56:39.9149787Z test/quantization/test_qat.py::TestQAT::test_infer_int4_weight_only_config PASSED 2025-09-09T14:56:39.9150700Z test/quantization/test_qat.py::TestQAT::test_legacy_quantize_api_e2e PASSED 2025-09-09T14:56:39.9151446Z test/quantization/test_qat.py::TestQAT::test_qat_4w_embedding PASSED 2025-09-09T14:56:39.9152151Z test/quantization/test_qat.py::TestQAT::test_qat_4w_linear SKIPPED (...) 2025-09-09T14:56:39.9152883Z test/quantization/test_qat.py::TestQAT::test_qat_4w_primitives SKIPPED 2025-09-09T14:56:39.9153590Z test/quantization/test_qat.py::TestQAT::test_qat_4w_quantizer SKIPPED 2025-09-09T14:56:39.9154336Z test/quantization/test_qat.py::TestQAT::test_qat_4w_quantizer_gradients PASSED 2025-09-09T14:56:39.9155165Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_eps PASSED 2025-09-09T14:56:39.9155926Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_linear PASSED 2025-09-09T14:56:39.9156729Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_prepare_vs_convert_bfloat16 PASSED 2025-09-09T14:56:39.9157587Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_prepare_vs_convert_float16 PASSED 2025-09-09T14:56:39.9158452Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_prepare_vs_convert_float32 PASSED 2025-09-09T14:56:39.9159230Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_quantizer PASSED 2025-09-09T14:56:39.9160080Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_quantizer_disable_fake_quant PASSED 2025-09-09T14:56:39.9161014Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_quantizer_disable_fake_quant_backward PASSED 2025-09-09T14:56:39.9161877Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_quantizer_gradients PASSED 2025-09-09T14:56:39.9162726Z test/quantization/test_qat.py::TestQAT::test_qat_8da4w_quantizer_meta_weights PASSED 2025-09-09T14:56:39.9163545Z test/quantization/test_qat.py::TestQAT::test_qat_api_convert_no_quantization PASSED 2025-09-09T14:56:39.9164302Z test/quantization/test_qat.py::TestQAT::test_qat_api_deprecation PASSED 2025-09-09T14:56:39.9165009Z test/quantization/test_qat.py::TestQAT::test_qat_config_init PASSED 2025-09-09T14:56:39.9165707Z test/quantization/test_qat.py::TestQAT::test_qat_fp8a4w_quantizer PASSED 2025-09-09T14:56:39.9166484Z test/quantization/test_qat.py::TestQAT::test_qat_linear_bias PASSED 2025-09-09T14:56:39.9167583Z test/quantization/test_qat.py::TestQAT::test_qat_nvfp4_use_per_tensor_scale_False SKIPPED 2025-09-09T14:56:39.9168762Z 
test/quantization/test_qat.py::TestQAT::test_qat_nvfp4_use_per_tensor_scale_True SKIPPED 2025-09-09T14:56:39.9169806Z test/quantization/test_qat.py::TestQAT::test_qat_prototype_bc PASSED 2025-09-09T14:56:39.9170859Z test/quantization/test_qat.py::TestQAT::test_qat_range_learning_is_symmetric_False PASSED 2025-09-09T14:56:39.9172027Z test/quantization/test_qat.py::TestQAT::test_qat_range_learning_is_symmetric_True PASSED 2025-09-09T14:56:39.9173054Z test/quantization/test_qat.py::TestQAT::test_quantize_api_e2e PASSED 2025-09-09T14:56:39.9174006Z test/quantization/test_qat.py::TestQAT::test_quantize_api_errors PASSED 2025-09-09T14:56:39.9175073Z test/quantization/test_qat.py::TestQAT::test_quantize_api_fp8_fp8_granularity0 SKIPPED 2025-09-09T14:56:39.9176213Z test/quantization/test_qat.py::TestQAT::test_quantize_api_fp8_fp8_granularity1 SKIPPED 2025-09-09T14:56:39.9177330Z test/quantization/test_qat.py::TestQAT::test_quantize_api_fp8_int4 SKIPPED 2025-09-09T14:56:39.9178353Z test/quantization/test_qat.py::TestQAT::test_quantize_api_int4_version_1 SKIPPED 2025-09-09T14:56:39.9179412Z test/quantization/test_qat.py::TestQAT::test_quantize_api_int4_version_2 SKIPPED 2025-09-09T14:56:39.9180454Z test/quantization/test_qat.py::TestQAT::test_quantize_api_int8_int4 SKIPPED 2025-09-09T14:56:39.9181430Z test/quantization/test_qat.py::TestQAT::test_quantize_api_nvfp4 SKIPPED 2025-09-09T14:56:39.9182408Z test/quantization/test_qat.py::TestQAT::test_quantize_api_prepare PASSED 2025-09-09T14:56:39.9183378Z test/quantization/test_qat.py::TestQAT::test_replace_linear_8da4w PASSED 2025-09-09T14:56:39.9184344Z test/quantization/test_qat.py::TestQAT::test_replace_linear_int4 PASSED 2025-09-09T14:56:39.9185386Z test/quantization/test_quant_api.py::TestQuantFlow::test_8da4w_quantizer PASSED 2025-09-09T14:56:39.9186548Z test/quantization/test_quant_api.py::TestQuantFlow::test_8da4w_quantizer_linear_bias PASSED 2025-09-09T14:56:39.9187778Z test/quantization/test_quant_api.py::TestQuantFlow::test_dynamic_quant_gpu_singleline PASSED 2025-09-09T14:56:39.9189201Z test/quantization/test_quant_api.py::TestQuantFlow::test_dynamic_quant_gpu_unified_api_eager_mode_impl SKIPPED 2025-09-09T14:56:39.9190672Z test/quantization/test_quant_api.py::TestQuantFlow::test_dynamic_quant_gpu_unified_api_unified_impl SKIPPED 2025-09-09T14:56:39.9191892Z test/quantization/test_quant_api.py::TestQuantFlow::test_eval_wrapper SKIPPED 2025-09-09T14:56:39.9193006Z test/quantization/test_quant_api.py::TestQuantFlow::test_eval_wrapper_llama3 SKIPPED 2025-09-09T14:56:39.9194176Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4_wo_quant_save_load SKIPPED 2025-09-09T14:56:39.9195614Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_bfloat16_x_dim_2_use_hqq_False PASSED 2025-09-09T14:56:39.9197006Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_bfloat16_x_dim_2_use_hqq_True PASSED 2025-09-09T14:56:39.9198387Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_bfloat16_x_dim_3_use_hqq_False PASSED 2025-09-09T14:56:39.9199789Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_bfloat16_x_dim_3_use_hqq_True PASSED 2025-09-09T14:56:39.9201158Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float16_x_dim_2_use_hqq_False PASSED 2025-09-09T14:56:39.9202535Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float16_x_dim_2_use_hqq_True PASSED 2025-09-09T14:56:39.9203919Z 
test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float16_x_dim_3_use_hqq_False PASSED 2025-09-09T14:56:39.9205321Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float16_x_dim_3_use_hqq_True PASSED 2025-09-09T14:56:39.9206711Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float32_x_dim_2_use_hqq_False PASSED 2025-09-09T14:56:39.9208076Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float32_x_dim_2_use_hqq_True PASSED 2025-09-09T14:56:39.9209438Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float32_x_dim_3_use_hqq_False PASSED 2025-09-09T14:56:39.9211035Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cpu_float32_x_dim_3_use_hqq_True PASSED 2025-09-09T14:56:39.9995753Z test/quantization/test_quant_api.py::TestQuantFlow::test_int4wo_cuda_serialization SKIPPED 2025-09-09T14:56:39.9996698Z test/quantization/test_quant_api.py::TestQuantFlow::test_int8_wo_quant_save_load SKIPPED 2025-09-09T14:56:39.9997643Z test/quantization/test_quant_api.py::TestQuantFlow::test_int8wo_quantized_model_to_device SKIPPED 2025-09-09T14:56:39.9998781Z test/quantization/test_quant_api.py::TestQuantFlow::test_module_fqn_to_config_default SKIPPED 2025-09-09T14:56:39.9999760Z test/quantization/test_quant_api.py::TestQuantFlow::test_module_fqn_to_config_embedding_linear PASSED 2025-09-09T14:56:40.0000736Z test/quantization/test_quant_api.py::TestQuantFlow::test_module_fqn_to_config_module_name SKIPPED 2025-09-09T14:56:40.0001669Z test/quantization/test_quant_api.py::TestQuantFlow::test_module_fqn_to_config_skip SKIPPED 2025-09-09T14:56:40.0002578Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantized_model_streaming SKIPPED 2025-09-09T14:56:40.0003599Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantized_tensor_subclass_8da4w_mapping_type0 PASSED 2025-09-09T14:56:40.0004687Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantized_tensor_subclass_8da4w_mapping_type1 PASSED 2025-09-09T14:56:40.0005705Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantized_tensor_subclass_int4 SKIPPED 2025-09-09T14:56:40.0006668Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantized_tensor_subclass_int8_wo SKIPPED 2025-09-09T14:56:40.0007669Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantized_tensor_subclass_save_load SKIPPED 2025-09-09T14:56:40.0008788Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantized_tensor_subclass_save_load_map_location SKIPPED 2025-09-09T14:56:40.0009812Z test/quantization/test_quant_api.py::TestQuantFlow::test_quantizer_int4_weight_only SKIPPED 2025-09-09T14:56:40.0011117Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config0 SKIPPED 2025-09-09T14:56:40.0012054Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config1 SKIPPED 2025-09-09T14:56:40.0013015Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config10 SKIPPED 2025-09-09T14:56:40.0014032Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config2 SKIPPED 2025-09-09T14:56:40.0014976Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config3 SKIPPED 2025-09-09T14:56:40.0015918Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config4 SKIPPED 2025-09-09T14:56:40.0016846Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config5 SKIPPED 
2025-09-09T14:56:40.0017786Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config6 SKIPPED 2025-09-09T14:56:40.0018717Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config7 SKIPPED 2025-09-09T14:56:40.0019652Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config8 SKIPPED 2025-09-09T14:56:40.0020647Z test/quantization/test_quant_api.py::TestQuantFlow::test_workflow_e2e_numerics_config9 SKIPPED 2025-09-09T14:56:40.0021647Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_choose_qparams_group_sym PASSED 2025-09-09T14:56:40.0022787Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_choose_qparams_group_sym_no_clipping_err PASSED 2025-09-09T14:56:40.0023902Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_choose_qparams_tensor_asym PASSED 2025-09-09T14:56:40.0024988Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_choose_qparams_tensor_asym_eps PASSED 2025-09-09T14:56:40.0026061Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_choose_qparams_tensor_sym PASSED 2025-09-09T14:56:40.0027102Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_choose_qparams_token_asym PASSED 2025-09-09T14:56:40.0028131Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_fake_quantize_affine PASSED 2025-09-09T14:56:40.0029232Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_fake_quantize_affine_cachemask PASSED 2025-09-09T14:56:40.0030299Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_get_group_qparams_symmetric PASSED 2025-09-09T14:56:40.0031413Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_get_group_qparams_symmetric_memory SKIPPED 2025-09-09T14:56:40.0032512Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_get_groupwise_affine_qparams PASSED 2025-09-09T14:56:40.0033693Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_groupwise_affine_dequantize_tensor_from_qparams PASSED 2025-09-09T14:56:40.0035036Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_groupwise_affine_quantize_tensor_from_qparams PASSED 2025-09-09T14:56:40.0036225Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_maybe_expand_scale_to_tensor_shape PASSED 2025-09-09T14:56:40.0037374Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_activation_per_token_abs_max PASSED 2025-09-09T14:56:40.0038563Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_activation_per_token_abs_max_dtype PASSED 2025-09-09T14:56:40.0039874Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_activation_per_token_abs_max_zero_input PASSED 2025-09-09T14:56:40.0041059Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_dequantize_channel_asym PASSED 2025-09-09T14:56:40.0042175Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_dequantize_channel_asym_4d PASSED 2025-09-09T14:56:40.0043417Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_dequantize_channel_asym_4d_multi_dim_reduction PASSED 2025-09-09T14:56:40.0044644Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_dequantize_group_sym PASSED 2025-09-09T14:56:40.0045741Z 
test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_quantize_dequantize_tensor_asym PASSED 2025-09-09T14:56:40.0046722Z test/quantization/test_quant_primitives.py::TestQuantPrimitives::test_raises PASSED 2025-09-09T14:56:40.0047555Z test/sparsity/test_activation24.py::test_sparse24_sm90_sparsify_identity SKIPPED 2025-09-09T14:56:40.0048406Z test/sparsity/test_activation24.py::test_sparse24_sm90_sparsify_identity_scaled SKIPPED 2025-09-09T14:56:40.0049222Z test/sparsity/test_activation24.py::test_sparse24_sm90_sparsify_srelu SKIPPED 2025-09-09T14:56:40.0050060Z test/sparsity/test_activation24.py::test_srelu_fp8_semi_sparse_activation_linear SKIPPED 2025-09-09T14:56:40.0050920Z test/sparsity/test_activation24.py::test_sparse24_fp8_sm90_cutlass_gemm_eye SKIPPED 2025-09-09T14:56:40.0051846Z test/sparsity/test_activation24.py::test_sparse24_fp8_sm90_cutlass_gemm_random_tensor SKIPPED 2025-09-09T14:56:40.0052966Z test/sparsity/test_fast_sparse_training.py::TestRuntimeSemiStructuredSparsity::test_runtime_weight_sparsification SKIPPED 2025-09-09T14:56:40.0054258Z test/sparsity/test_fast_sparse_training.py::TestRuntimeSemiStructuredSparsity::test_runtime_weight_sparsification_compile SKIPPED 2025-09-09T14:56:40.0055329Z test/sparsity/test_marlin.py::SparseMarlin24::test_pack_unpack_equivalence SKIPPED 2025-09-09T14:56:40.0056221Z test/sparsity/test_marlin.py::SparseMarlin24::test_quant_sparse_marlin_layout_compile SKIPPED 2025-09-09T14:56:40.0057129Z test/sparsity/test_marlin.py::SparseMarlin24::test_quant_sparse_marlin_layout_eager SKIPPED 2025-09-09T14:56:40.0057982Z test/sparsity/test_sparse_api.py::TestSemiStructuredSparse::test_sparse SKIPPED 2025-09-09T14:56:40.0058891Z test/sparsity/test_sparse_api.py::TestQuantSemiSparse::test_quant_semi_sparse_compile_False SKIPPED 2025-09-09T14:56:40.0059907Z test/sparsity/test_sparse_api.py::TestQuantSemiSparse::test_sparse_marlin_compile_False SKIPPED 2025-09-09T14:56:40.0060870Z test/sparsity/test_sparse_api.py::TestQuantSemiSparse::test_sparse_marlin_compile_True SKIPPED 2025-09-09T14:56:40.0061875Z test/sparsity/test_sparse_api.py::TestBlockSparseWeight::test_sparse_compile_False_input_shape_1 SKIPPED 2025-09-09T14:56:40.0062951Z test/sparsity/test_sparse_api.py::TestBlockSparseWeight::test_sparse_compile_False_input_shape_1024 SKIPPED 2025-09-09T14:56:40.0063993Z test/sparsity/test_sparse_api.py::TestBlockSparseWeight::test_sparse_compile_True_input_shape_1 SKIPPED 2025-09-09T14:56:40.0065043Z test/sparsity/test_sparse_api.py::TestBlockSparseWeight::test_sparse_compile_True_input_shape_1024 SKIPPED 2025-09-09T14:56:40.0066067Z test/sparsity/test_sparse_api.py::TestQuantBlockSparseWeight::test_sparse_compile_False SKIPPED 2025-09-09T14:56:40.0067049Z test/sparsity/test_sparse_api.py::TestQuantBlockSparseWeight::test_sparse_compile_True SKIPPED 2025-09-09T14:56:40.0067917Z test/sparsity/test_supermask.py::TestSupermask::test_from_linear SKIPPED 2025-09-09T14:56:40.0068809Z test/sparsity/test_supermask.py::TestSupermask::test_supermask_sparsity_level_0_25_blocksize_2 SKIPPED 2025-09-09T14:56:40.0069870Z test/sparsity/test_supermask.py::TestSupermask::test_supermask_sparsity_level_0_25_blocksize_4 SKIPPED 2025-09-09T14:56:40.0070904Z test/sparsity/test_supermask.py::TestSupermask::test_supermask_sparsity_level_0_25_blocksize_8 SKIPPED 2025-09-09T14:59:01.4594980Z test/sparsity/test_supermask.py::TestSupermask::test_supermask_sparsity_level_0_5_blocksize_2 SKIPPED 2025-09-09T14:59:01.4596192Z 
test/sparsity/test_supermask.py::TestSupermask::test_supermask_sparsity_level_0_5_blocksize_4 SKIPPED 2025-09-09T14:59:01.4597614Z test/sparsity/test_supermask.py::TestSupermask::test_supermask_sparsity_level_0_5_blocksize_8 SKIPPED 2025-09-09T14:59:01.4598580Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_one_layer_mlp_2x4 PASSED 2025-09-09T14:59:01.4599578Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_one_layer_mlp_unstructured PASSED 2025-09-09T14:59:01.4600430Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_prepare PASSED 2025-09-09T14:59:01.4601180Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_squash_mask PASSED 2025-09-09T14:59:01.4602039Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_two_layer_mlp_unstructured PASSED 2025-09-09T14:59:01.4603104Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_two_layer_mlp_unstructured_custom_config PASSED 2025-09-09T14:59:01.4604224Z test/test_ao_models.py::TorchAOBasicTestCase::test_ao_inference_mode_device_cpu_batch_size_1_is_training_False PASSED 2025-09-09T14:59:01.4605483Z test/test_ao_models.py::TorchAOBasicTestCase::test_ao_inference_mode_device_cpu_batch_size_1_is_training_True PASSED 2025-09-09T14:59:01.4606736Z test/test_ao_models.py::TorchAOBasicTestCase::test_ao_inference_mode_device_cpu_batch_size_4_is_training_False PASSED 2025-09-09T14:59:01.4607898Z test/test_ao_models.py::TorchAOBasicTestCase::test_ao_inference_mode_device_cpu_batch_size_4_is_training_True PASSED 2025-09-09T14:59:01.4609021Z test/test_low_bit_optim.py::TestQuantize::test_bf16_stochastic_round_device_cpu_compile_False PASSED 2025-09-09T14:59:01.4610215Z test/test_low_bit_optim.py::TestQuantize::test_bf16_stochastic_round_device_cpu_compile_True PASSED 2025-09-09T14:59:01.4611184Z test/test_low_bit_optim.py::TestQuantize::test_quantize_4bit_with_qmap_compile_device_cpu PASSED 2025-09-09T14:59:01.4612231Z test/test_low_bit_optim.py::TestQuantize::test_quantize_4bit_with_qmap_correctness_device_cpu PASSED 2025-09-09T14:59:01.4613207Z test/test_low_bit_optim.py::TestQuantize::test_quantize_8bit_with_qmap_compile_device_cpu PASSED 2025-09-09T14:59:01.4614258Z test/test_low_bit_optim.py::TestQuantize::test_quantize_8bit_with_qmap_correctness_device_cpu PASSED 2025-09-09T14:59:01.4615287Z test/test_low_bit_optim.py::TestOptim::test_optim_4bit_correctness_optim_name_Adam4bit SKIPPED 2025-09-09T14:59:01.4616229Z test/test_low_bit_optim.py::TestOptim::test_optim_4bit_correctness_optim_name_AdamW4bit SKIPPED 2025-09-09T14:59:01.4617142Z test/test_low_bit_optim.py::TestOptim::test_optim_8bit_correctness_optim_name_Adam8bit SKIPPED 2025-09-09T14:59:01.4618066Z test/test_low_bit_optim.py::TestOptim::test_optim_8bit_correctness_optim_name_AdamW8bit SKIPPED 2025-09-09T14:59:01.4618962Z test/test_low_bit_optim.py::TestOptim::test_optim_bf16_stochastic_round_correctness PASSED 2025-09-09T14:59:01.4619973Z test/test_low_bit_optim.py::TestOptim::test_optim_cpu_offload_correctness_offload_grad_False_grad_accum_1 SKIPPED 2025-09-09T14:59:01.4621176Z test/test_low_bit_optim.py::TestOptim::test_optim_cpu_offload_correctness_offload_grad_False_grad_accum_2 SKIPPED 2025-09-09T14:59:01.4622261Z test/test_low_bit_optim.py::TestOptim::test_optim_cpu_offload_correctness_offload_grad_True_grad_accum_1 SKIPPED 2025-09-09T14:59:01.4623277Z test/test_low_bit_optim.py::TestOptim::test_optim_cpu_offload_save_load SKIPPED 2025-09-09T14:59:01.4624258Z test/test_low_bit_optim.py::TestOptim::test_optim_default_dtype_bf16_optim_name_Adam4bit_device_cpu 
PASSED 2025-09-09T14:59:01.4625281Z test/test_low_bit_optim.py::TestOptim::test_optim_default_dtype_bf16_optim_name_Adam8bit_device_cpu PASSED 2025-09-09T14:59:01.4626303Z test/test_low_bit_optim.py::TestOptim::test_optim_default_dtype_bf16_optim_name_AdamFp8_device_cpu PASSED 2025-09-09T14:59:01.4627377Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_Adam4bit_bfloat16_device_cpu PASSED 2025-09-09T14:59:01.4628430Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_Adam4bit_float32_device_cpu PASSED 2025-09-09T14:59:01.4629447Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_Adam8bit_bfloat16_device_cpu PASSED 2025-09-09T14:59:01.4630440Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_Adam8bit_float32_device_cpu PASSED 2025-09-09T14:59:01.4631411Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamFp8_bfloat16_device_cpu PASSED 2025-09-09T14:59:01.4632389Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamFp8_float32_device_cpu PASSED 2025-09-09T14:59:01.4633436Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamW4bit_bfloat16_device_cpu PASSED 2025-09-09T14:59:01.4634407Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamW4bit_float32_device_cpu PASSED 2025-09-09T14:59:01.4635623Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamW8bit_bfloat16_device_cpu PASSED 2025-09-09T14:59:01.4636630Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamW8bit_float32_device_cpu PASSED 2025-09-09T14:59:01.4637606Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamWFp8_bfloat16_device_cpu PASSED 2025-09-09T14:59:01.4638643Z test/test_low_bit_optim.py::TestOptim::test_optim_smoke_optim_name_AdamWFp8_float32_device_cpu PASSED 2025-09-09T14:59:01.4639613Z test/test_low_bit_optim.py::TestOptim::test_param_groups_optim_name_Adam4bit_device_cpu PASSED 2025-09-09T14:59:01.4640519Z test/test_low_bit_optim.py::TestOptim::test_param_groups_optim_name_Adam8bit_device_cpu PASSED 2025-09-09T14:59:01.4641471Z test/test_low_bit_optim.py::TestOptim::test_param_groups_optim_name_AdamFp8_device_cpu PASSED 2025-09-09T14:59:01.4642435Z test/test_low_bit_optim.py::TestOptim::test_subclass_slice_subclass0_shape0_device_cpu PASSED 2025-09-09T14:59:01.4643405Z test/test_low_bit_optim.py::TestOptim::test_subclass_slice_subclass0_shape1_device_cpu PASSED 2025-09-09T14:59:01.4644302Z test/test_low_bit_optim.py::TestOptim::test_subclass_slice_subclass1_shape0_device_cpu PASSED 2025-09-09T14:59:01.4645308Z test/test_low_bit_optim.py::TestOptim::test_subclass_slice_subclass1_shape1_device_cpu PASSED 2025-09-09T14:59:01.4646213Z test/test_low_bit_optim.py::TestOptim::test_subclass_slice_subclass2_shape0_device_cpu PASSED 2025-09-09T14:59:01.4647127Z test/test_low_bit_optim.py::TestOptim::test_subclass_slice_subclass2_shape1_device_cpu PASSED 2025-09-09T14:59:01.4648317Z test/test_low_bit_optim.py::TestFSDP2::test_fsdp2 I0909 14:58:52.559091 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 0 with pid 13250 2025-09-09T14:59:01.4649517Z I0909 14:58:52.611672 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 1 with pid 13251 2025-09-09T14:59:01.4650403Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 2025-09-09T14:59:01.4651098Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 
2025-09-09T14:59:01.4651597Z dist init r=1, world=2 2025-09-09T14:59:01.4651859Z dist init r=0, world=2 2025-09-09T14:59:01.4652170Z SKIPPED (Need at l...) 2025-09-09T14:59:01.4653093Z test/test_low_bit_optim.py::TestFSDP2::test_uneven_shard I0909 14:58:57.075349 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 0 with pid 13290 2025-09-09T14:59:01.4654383Z I0909 14:58:57.128323 320 site-packages/torch/testing/_internal/common_distributed.py:741] Started process 1 with pid 13291 2025-09-09T14:59:01.4655217Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 2025-09-09T14:59:01.4655881Z The 8-bit optimizer is not available on your device, only available on CUDA for now. 2025-09-09T14:59:01.4656402Z dist init r=0, world=2 2025-09-09T14:59:01.4656677Z dist init r=1, world=2 2025-09-09T14:59:01.4657035Z SKIPPED (Ne...) 2025-09-09T14:59:01.4657652Z test/test_model_architecture.py::TestModels::test_ln_linear_activation_model_0_cpu PASSED 2025-09-09T14:59:01.4658489Z test/test_model_architecture.py::TestModels::test_toy_linear_model_0_cpu PASSED 2025-09-09T14:59:01.4659301Z test/test_model_architecture.py::TestModels::test_transformer_block_0_cpu PASSED 2025-09-09T14:59:01.4660377Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_1_OC_2048_IC_4096_splitK_5_ebits_2_mbits_2_bfloat16 SKIPPED 2025-09-09T14:59:01.4661537Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_1_OC_2048_IC_4096_splitK_5_ebits_2_mbits_2_float16 SKIPPED 2025-09-09T14:59:01.4662828Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_1_OC_2048_IC_4096_splitK_5_ebits_3_mbits_2_bfloat16 SKIPPED 2025-09-09T14:59:01.4664082Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_1_OC_2048_IC_4096_splitK_5_ebits_3_mbits_2_float16 SKIPPED 2025-09-09T14:59:01.4665215Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_2_OC_8192_IC_8192_splitK_6_ebits_2_mbits_2_bfloat16 SKIPPED 2025-09-09T14:59:01.4666414Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_2_OC_8192_IC_8192_splitK_6_ebits_2_mbits_2_float16 SKIPPED 2025-09-09T14:59:01.4667531Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_2_OC_8192_IC_8192_splitK_6_ebits_3_mbits_2_bfloat16 SKIPPED 2025-09-09T14:59:01.4668735Z test/test_ops.py::TestOps::test_quant_llm_linear_correctness_BS_2_OC_8192_IC_8192_splitK_6_ebits_3_mbits_2_float16 SKIPPED 2025-09-09T14:59:01.4669672Z test/test_ops.py::TestOps::test_quant_llm_linear_ebits_2_mbits_2_bfloat16 SKIPPED 2025-09-09T14:59:01.4936107Z test/test_ops.py::TestOps::test_quant_llm_linear_ebits_2_mbits_2_float16 SKIPPED 2025-09-09T14:59:01.4936928Z test/test_ops.py::TestOps::test_quant_llm_linear_ebits_3_mbits_2_bfloat16 SKIPPED 2025-09-09T14:59:01.4938068Z test/test_ops.py::TestOps::test_quant_llm_linear_ebits_3_mbits_2_float16 SKIPPED 2025-09-09T14:59:01.4939092Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4940503Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4941745Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4943147Z 
test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4944537Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.4945769Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4947271Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4948503Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4949845Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4951098Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4952397Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.4953729Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4955056Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4956290Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4957547Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4958891Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4960182Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.4961508Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4962768Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4964084Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4965337Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4966568Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4967951Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.4969207Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 
2025-09-09T14:59:01.4970524Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4971755Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4973043Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4974302Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4975535Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.4976897Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4978146Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4979445Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4980676Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4982058Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4983293Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.4984580Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4985829Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4987047Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4988412Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4989666Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4990956Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.4992202Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.4993527Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.4994834Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.4996093Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_32_mask_dtype0 
SKIPPED 2025-09-09T14:59:01.4997443Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.4998674Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5000005Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_120_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5001248Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5002476Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5003799Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5005031Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5294768Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5296015Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5297218Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5298409Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5299706Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5300905Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5302108Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5303315Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_18_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5304507Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5305754Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5306960Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5308181Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5309382Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5310860Z 
test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5312117Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5313352Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5314711Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5316155Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5317371Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5318627Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_16_q_seq_len_89_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5319872Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5321090Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5322330Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5323622Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5324831Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5326065Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5327309Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5328565Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5329801Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5331027Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5332249Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5333485Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_18_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5334751Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5335981Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5337215Z 
test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5338439Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5339657Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5340882Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_100_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5342133Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_32_bfloat16 SKIPPED 2025-09-09T14:59:01.5343385Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_32_float32 SKIPPED 2025-09-09T14:59:01.5344609Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_32_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5345842Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_64_bfloat16 SKIPPED 2025-09-09T14:59:01.5347070Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_64_float32 SKIPPED 2025-09-09T14:59:01.5348302Z test/test_ops.py::TestOps::test_scaled_dot_product_int8_op_batch_size_56_n_head_2_q_seq_len_89_kv_seq_len_253_head_dim_64_mask_dtype0 SKIPPED 2025-09-09T14:59:01.5349411Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x4096-tiles_2] SKIPPED 2025-09-09T14:59:01.5350365Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x4096-tiles_4] SKIPPED 2025-09-09T14:59:01.5351325Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x4096-tiles_8] SKIPPED 2025-09-09T14:59:01.5352317Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x11008-tiles_2] SKIPPED 2025-09-09T14:59:01.5353275Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x11008-tiles_4] SKIPPED 2025-09-09T14:59:01.5354236Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x11008-tiles_8] SKIPPED 2025-09-09T14:59:01.5355266Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_11008x4096-tiles_2] SKIPPED 2025-09-09T14:59:01.5356283Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_11008x4096-tiles_4] SKIPPED 2025-09-09T14:59:01.5357248Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_11008x4096-tiles_8] SKIPPED 2025-09-09T14:59:01.5358199Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x14336-tiles_2] SKIPPED 2025-09-09T14:59:01.5359168Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x14336-tiles_4] SKIPPED 2025-09-09T14:59:01.5360139Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_4096x14336-tiles_8] SKIPPED 2025-09-09T14:59:01.5361093Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_14336x4096-tiles_2] SKIPPED 2025-09-09T14:59:01.5362058Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_14336x4096-tiles_4] SKIPPED 2025-09-09T14:59:01.5717668Z 
test/test_ops.py::test_unpack_tensor_core_tiled_layout_correctness[shape_14336x4096-tiles_8] SKIPPED 2025-09-09T14:59:01.5719032Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x4096-tiles_2] SKIPPED 2025-09-09T14:59:01.5719915Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x4096-tiles_4] SKIPPED 2025-09-09T14:59:01.5720802Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x4096-tiles_8] SKIPPED 2025-09-09T14:59:01.5721743Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x11008-tiles_2] SKIPPED 2025-09-09T14:59:01.5722619Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x11008-tiles_4] SKIPPED 2025-09-09T14:59:01.5723485Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x11008-tiles_8] SKIPPED 2025-09-09T14:59:01.5724426Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_11008x4096-tiles_2] SKIPPED 2025-09-09T14:59:01.5725288Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_11008x4096-tiles_4] SKIPPED 2025-09-09T14:59:01.5726147Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_11008x4096-tiles_8] SKIPPED 2025-09-09T14:59:01.5727193Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x14336-tiles_2] SKIPPED 2025-09-09T14:59:01.5728063Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x14336-tiles_4] SKIPPED 2025-09-09T14:59:01.5728921Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_4096x14336-tiles_8] SKIPPED 2025-09-09T14:59:01.5729870Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_14336x4096-tiles_2] SKIPPED 2025-09-09T14:59:01.5730728Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_14336x4096-tiles_4] SKIPPED 2025-09-09T14:59:01.5731599Z test/test_ops.py::test_unpack_tensor_core_tiled_layout_op[shape_14336x4096-tiles_8] SKIPPED 2025-09-09T14:59:01.5732674Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-2-32] SKIPPED 2025-09-09T14:59:01.5733745Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.5734821Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.5736046Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.5737135Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.5738262Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.5739380Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.5740535Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.5741700Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.5742755Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.5744007Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.5745138Z 
test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.5746196Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-2-32] SKIPPED 2025-09-09T14:59:01.5747321Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-2-64] SKIPPED 2025-09-09T14:59:01.5748412Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-2-128] SKIPPED 2025-09-09T14:59:01.5749472Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-2-256] SKIPPED 2025-09-09T14:59:01.5750598Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-4-32] SKIPPED 2025-09-09T14:59:01.5751834Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-4-64] SKIPPED 2025-09-09T14:59:01.5752998Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-4-128] SKIPPED 2025-09-09T14:59:01.5754081Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-4-256] SKIPPED 2025-09-09T14:59:01.5755253Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-8-32] SKIPPED 2025-09-09T14:59:01.5756417Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-8-64] SKIPPED 2025-09-09T14:59:01.5757549Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-8-128] SKIPPED 2025-09-09T14:59:01.5758641Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 11008)-8-256] SKIPPED 2025-09-09T14:59:01.5759790Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-2-32] SKIPPED 2025-09-09T14:59:01.5760868Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.5762030Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.5763115Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.5764198Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.5765370Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.5766507Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.5767594Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.5768661Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.5769884Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.5771026Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.5772103Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(11008, 4096)-8-256] SKIPPED 
2025-09-09T14:59:01.5773268Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-2-32] SKIPPED 2025-09-09T14:59:01.5774341Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-2-64] SKIPPED 2025-09-09T14:59:01.5775511Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-2-128] SKIPPED 2025-09-09T14:59:01.5776606Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-2-256] SKIPPED 2025-09-09T14:59:01.5777674Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-4-32] SKIPPED 2025-09-09T14:59:01.5778892Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-4-64] SKIPPED 2025-09-09T14:59:01.5779967Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-4-128] SKIPPED 2025-09-09T14:59:01.5781132Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-4-256] SKIPPED 2025-09-09T14:59:01.5782227Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-8-32] SKIPPED 2025-09-09T14:59:01.5783295Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-8-64] SKIPPED 2025-09-09T14:59:01.5784468Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-8-128] SKIPPED 2025-09-09T14:59:01.5785563Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(4096, 14336)-8-256] SKIPPED 2025-09-09T14:59:01.5786706Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-2-32] SKIPPED 2025-09-09T14:59:01.5787836Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.5788902Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.5790073Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.5791153Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.6103198Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.6104522Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.6105590Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.6106639Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.6108048Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.6109093Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.6110426Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_quant_dequant[(14336, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.6111511Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 
4096)-2-32] SKIPPED 2025-09-09T14:59:01.6112680Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.6113938Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.6115264Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.6116508Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.6117631Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.6118734Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.6120170Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.6121385Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.6122700Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.6123825Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.6125062Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.6126189Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-2-32] SKIPPED 2025-09-09T14:59:01.6127305Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-2-64] SKIPPED 2025-09-09T14:59:01.6128561Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-2-128] SKIPPED 2025-09-09T14:59:01.6129769Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-2-256] SKIPPED 2025-09-09T14:59:01.6131030Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-4-32] SKIPPED 2025-09-09T14:59:01.6132150Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-4-64] SKIPPED 2025-09-09T14:59:01.6133389Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-4-128] SKIPPED 2025-09-09T14:59:01.6134533Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-4-256] SKIPPED 2025-09-09T14:59:01.6135663Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-8-32] SKIPPED 2025-09-09T14:59:01.6136934Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-8-64] SKIPPED 2025-09-09T14:59:01.6138050Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-8-128] SKIPPED 2025-09-09T14:59:01.6139404Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 11008)-8-256] SKIPPED 2025-09-09T14:59:01.6140540Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-2-32] SKIPPED 
2025-09-09T14:59:01.6141669Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.6142939Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.6144054Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.6145373Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.6146513Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.6147631Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.6148896Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.6150016Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.6151266Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.6152443Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.6153585Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(11008, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.6154913Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-2-32] SKIPPED 2025-09-09T14:59:01.6156035Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-2-64] SKIPPED 2025-09-09T14:59:01.6157304Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-2-128] SKIPPED 2025-09-09T14:59:01.6158445Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-2-256] SKIPPED 2025-09-09T14:59:01.6159584Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-4-32] SKIPPED 2025-09-09T14:59:01.6160887Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-4-64] SKIPPED 2025-09-09T14:59:01.6162018Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-4-128] SKIPPED 2025-09-09T14:59:01.6163269Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-4-256] SKIPPED 2025-09-09T14:59:01.6164396Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-8-32] SKIPPED 2025-09-09T14:59:01.6165512Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-8-64] SKIPPED 2025-09-09T14:59:01.6166638Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-8-128] SKIPPED 2025-09-09T14:59:01.6167906Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(4096, 14336)-8-256] SKIPPED 2025-09-09T14:59:01.6169035Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-2-32] SKIPPED 
2025-09-09T14:59:01.6170159Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.6171461Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.6172600Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.6173859Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.6174983Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.6176263Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.6177434Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.6178554Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.6619557Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.6620716Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.6621908Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_correctness_unpack_and_dequant[(14336, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.6623057Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-2-32] SKIPPED 2025-09-09T14:59:01.6624010Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.6624857Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.6625687Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.6626551Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.6627470Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.6628290Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.6629125Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.6629949Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.6630849Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.6631817Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.6632654Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.6633498Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-2-32] SKIPPED 2025-09-09T14:59:01.6634500Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-2-64] SKIPPED 2025-09-09T14:59:01.6635598Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-2-128] SKIPPED 2025-09-09T14:59:01.6636475Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-2-256] SKIPPED 2025-09-09T14:59:01.6637340Z 
test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-4-32] SKIPPED 2025-09-09T14:59:01.6638335Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-4-64] SKIPPED 2025-09-09T14:59:01.6639192Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-4-128] SKIPPED 2025-09-09T14:59:01.6640145Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-4-256] SKIPPED 2025-09-09T14:59:01.6641145Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-8-32] SKIPPED 2025-09-09T14:59:01.6641994Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-8-64] SKIPPED 2025-09-09T14:59:01.6642863Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-8-128] SKIPPED 2025-09-09T14:59:01.6643841Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 11008)-8-256] SKIPPED 2025-09-09T14:59:01.6644786Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-2-32] SKIPPED 2025-09-09T14:59:01.6645669Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.6646654Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.6647532Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.6648391Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.6649339Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.6650217Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.6651074Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.6651988Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.6652970Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.6653834Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.6654703Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(11008, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.6655684Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-2-32] SKIPPED 2025-09-09T14:59:01.6656543Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-2-64] SKIPPED 2025-09-09T14:59:01.6657394Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-2-128] SKIPPED 2025-09-09T14:59:01.6658387Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-2-256] SKIPPED 2025-09-09T14:59:01.6659254Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-4-32] SKIPPED 2025-09-09T14:59:01.6660155Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-4-64] SKIPPED 2025-09-09T14:59:01.6661152Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-4-128] SKIPPED 2025-09-09T14:59:01.6662017Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-4-256] SKIPPED 2025-09-09T14:59:01.6662881Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-8-32] SKIPPED 2025-09-09T14:59:01.6663869Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-8-64] SKIPPED 
2025-09-09T14:59:01.6664727Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-8-128] SKIPPED 2025-09-09T14:59:01.6665591Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(4096, 14336)-8-256] SKIPPED 2025-09-09T14:59:01.6666515Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-2-32] SKIPPED 2025-09-09T14:59:01.6667432Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-2-64] SKIPPED 2025-09-09T14:59:01.6668305Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-2-128] SKIPPED 2025-09-09T14:59:01.6669311Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-2-256] SKIPPED 2025-09-09T14:59:01.6670186Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-4-32] SKIPPED 2025-09-09T14:59:01.6671033Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-4-64] SKIPPED 2025-09-09T14:59:01.6671898Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-4-128] SKIPPED 2025-09-09T14:59:01.6672904Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-4-256] SKIPPED 2025-09-09T14:59:01.6673818Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-8-32] SKIPPED 2025-09-09T14:59:01.6674750Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-8-64] SKIPPED 2025-09-09T14:59:01.6675750Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-8-128] SKIPPED 2025-09-09T14:59:01.6676720Z test/test_ops.py::test_dequantize_tensor_core_tiled_layout_op[(14336, 4096)-8-256] SKIPPED 2025-09-09T14:59:01.6677512Z test/test_ops.py::test_marlin_24[1-128-512-4--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.6678326Z test/test_ops.py::test_marlin_24[1-128-512-4--1-(1, 4, 8)] SKIPPED (...) 2025-09-09T14:59:01.6679016Z test/test_ops.py::test_marlin_24[1-128-512-4--1-(1, 7, 5)] SKIPPED (...) 2025-09-09T14:59:01.6679677Z test/test_ops.py::test_marlin_24[1-128-512-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.6680339Z test/test_ops.py::test_marlin_24[1-128-512-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.6681169Z test/test_ops.py::test_marlin_24[1-128-512-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.6681846Z test/test_ops.py::test_marlin_24[1-128-512-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.6682488Z test/test_ops.py::test_marlin_24[1-128-512-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.6683118Z test/test_ops.py::test_marlin_24[1-128-512-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.6683887Z test/test_ops.py::test_marlin_24[1-128-512-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.6684566Z test/test_ops.py::test_marlin_24[1-128-512-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.6685232Z test/test_ops.py::test_marlin_24[1-128-512-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.6685907Z test/test_ops.py::test_marlin_24[1-128-512-8--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.6686695Z test/test_ops.py::test_marlin_24[1-128-512-8--1-(1, 4, 8)] SKIPPED (...) 2025-09-09T14:59:01.6687390Z test/test_ops.py::test_marlin_24[1-128-512-8--1-(1, 7, 5)] SKIPPED (...) 
2025-09-09T14:59:01.6688124Z test/test_ops.py::test_marlin_24[1-128-512-8--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.6688787Z test/test_ops.py::test_marlin_24[1-128-512-8--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.6689461Z test/test_ops.py::test_marlin_24[1-128-512-8--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.6690148Z test/test_ops.py::test_marlin_24[1-128-512-8-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.6690787Z test/test_ops.py::test_marlin_24[1-128-512-8-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.6691410Z test/test_ops.py::test_marlin_24[1-128-512-8-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.6692065Z test/test_ops.py::test_marlin_24[1-128-512-8-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.6692786Z test/test_ops.py::test_marlin_24[1-128-512-8-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7234241Z test/test_ops.py::test_marlin_24[1-128-512-8-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7235054Z test/test_ops.py::test_marlin_24[4-128-512-4--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.7235721Z test/test_ops.py::test_marlin_24[4-128-512-4--1-(1, 4, 8)] SKIPPED (...) 2025-09-09T14:59:01.7236390Z test/test_ops.py::test_marlin_24[4-128-512-4--1-(1, 7, 5)] SKIPPED (...) 2025-09-09T14:59:01.7237031Z test/test_ops.py::test_marlin_24[4-128-512-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7237894Z test/test_ops.py::test_marlin_24[4-128-512-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7238547Z test/test_ops.py::test_marlin_24[4-128-512-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7239182Z test/test_ops.py::test_marlin_24[4-128-512-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7239812Z test/test_ops.py::test_marlin_24[4-128-512-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7240424Z test/test_ops.py::test_marlin_24[4-128-512-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7241132Z test/test_ops.py::test_marlin_24[4-128-512-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7241770Z test/test_ops.py::test_marlin_24[4-128-512-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7242425Z test/test_ops.py::test_marlin_24[4-128-512-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7243083Z test/test_ops.py::test_marlin_24[4-128-512-8--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.7243737Z test/test_ops.py::test_marlin_24[4-128-512-8--1-(1, 4, 8)] SKIPPED (...) 2025-09-09T14:59:01.7244397Z test/test_ops.py::test_marlin_24[4-128-512-8--1-(1, 7, 5)] SKIPPED (...) 2025-09-09T14:59:01.7245032Z test/test_ops.py::test_marlin_24[4-128-512-8--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7245703Z test/test_ops.py::test_marlin_24[4-128-512-8--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7246360Z test/test_ops.py::test_marlin_24[4-128-512-8--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7247116Z test/test_ops.py::test_marlin_24[4-128-512-8-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7247826Z test/test_ops.py::test_marlin_24[4-128-512-8-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7248475Z test/test_ops.py::test_marlin_24[4-128-512-8-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7249121Z test/test_ops.py::test_marlin_24[4-128-512-8-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7249974Z test/test_ops.py::test_marlin_24[4-128-512-8-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7250633Z test/test_ops.py::test_marlin_24[4-128-512-8-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7251311Z test/test_ops.py::test_marlin_24[8-128-512-4--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.7251982Z test/test_ops.py::test_marlin_24[8-128-512-4--1-(1, 4, 8)] SKIPPED (...) 
2025-09-09T14:59:01.7252671Z test/test_ops.py::test_marlin_24[8-128-512-4--1-(1, 7, 5)] SKIPPED (...) 2025-09-09T14:59:01.7253340Z test/test_ops.py::test_marlin_24[8-128-512-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7253993Z test/test_ops.py::test_marlin_24[8-128-512-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7254711Z test/test_ops.py::test_marlin_24[8-128-512-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7255354Z test/test_ops.py::test_marlin_24[8-128-512-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7256001Z test/test_ops.py::test_marlin_24[8-128-512-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7256638Z test/test_ops.py::test_marlin_24[8-128-512-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7257302Z test/test_ops.py::test_marlin_24[8-128-512-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7257981Z test/test_ops.py::test_marlin_24[8-128-512-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7258635Z test/test_ops.py::test_marlin_24[8-128-512-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7259313Z test/test_ops.py::test_marlin_24[8-128-512-8--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.7259980Z test/test_ops.py::test_marlin_24[8-128-512-8--1-(1, 4, 8)] SKIPPED (...) 2025-09-09T14:59:01.7260671Z test/test_ops.py::test_marlin_24[8-128-512-8--1-(1, 7, 5)] SKIPPED (...) 2025-09-09T14:59:01.7261353Z test/test_ops.py::test_marlin_24[8-128-512-8--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7262005Z test/test_ops.py::test_marlin_24[8-128-512-8--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7262721Z test/test_ops.py::test_marlin_24[8-128-512-8--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7263366Z test/test_ops.py::test_marlin_24[8-128-512-8-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7264006Z test/test_ops.py::test_marlin_24[8-128-512-8-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7264638Z test/test_ops.py::test_marlin_24[8-128-512-8-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7265297Z test/test_ops.py::test_marlin_24[8-128-512-8-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7265968Z test/test_ops.py::test_marlin_24[8-128-512-8-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7266668Z test/test_ops.py::test_marlin_24[8-128-512-8-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7267330Z test/test_ops.py::test_marlin_24[16-128-512-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7267956Z test/test_ops.py::test_marlin_24[16-128-512-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7268600Z test/test_ops.py::test_marlin_24[16-128-512-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7269259Z test/test_ops.py::test_marlin_24[16-128-512-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7269911Z test/test_ops.py::test_marlin_24[16-128-512-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7270583Z test/test_ops.py::test_marlin_24[16-128-512-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7271233Z test/test_ops.py::test_marlin_24[16-128-512-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7271891Z test/test_ops.py::test_marlin_24[16-128-512-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7272533Z test/test_ops.py::test_marlin_24[16-128-512-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7273249Z test/test_ops.py::test_marlin_24[16-128-512-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7273947Z test/test_ops.py::test_marlin_24[16-128-512-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7274702Z test/test_ops.py::test_marlin_24[16-128-512-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7275383Z test/test_ops.py::test_marlin_24[16-128-512-8--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7276017Z test/test_ops.py::test_marlin_24[16-128-512-8--1-(1, 4, 
8)] SKIPPED 2025-09-09T14:59:01.7276660Z test/test_ops.py::test_marlin_24[16-128-512-8--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7277309Z test/test_ops.py::test_marlin_24[16-128-512-8--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7277974Z test/test_ops.py::test_marlin_24[16-128-512-8--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7278635Z test/test_ops.py::test_marlin_24[16-128-512-8--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7279288Z test/test_ops.py::test_marlin_24[16-128-512-8-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7279996Z test/test_ops.py::test_marlin_24[16-128-512-8-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7280634Z test/test_ops.py::test_marlin_24[16-128-512-8-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7281304Z test/test_ops.py::test_marlin_24[16-128-512-8-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7281988Z test/test_ops.py::test_marlin_24[16-128-512-8-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7282648Z test/test_ops.py::test_marlin_24[16-128-512-8-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7283312Z test/test_ops.py::test_marlin_24[32-128-512-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7283940Z test/test_ops.py::test_marlin_24[32-128-512-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7284583Z test/test_ops.py::test_marlin_24[32-128-512-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7285229Z test/test_ops.py::test_marlin_24[32-128-512-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7285903Z test/test_ops.py::test_marlin_24[32-128-512-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7286563Z test/test_ops.py::test_marlin_24[32-128-512-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7287203Z test/test_ops.py::test_marlin_24[32-128-512-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7287891Z test/test_ops.py::test_marlin_24[32-128-512-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7288528Z test/test_ops.py::test_marlin_24[32-128-512-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7289188Z test/test_ops.py::test_marlin_24[32-128-512-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7289848Z test/test_ops.py::test_marlin_24[32-128-512-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7290518Z test/test_ops.py::test_marlin_24[32-128-512-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7291178Z test/test_ops.py::test_marlin_24[32-128-512-8--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7291851Z test/test_ops.py::test_marlin_24[32-128-512-8--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7292488Z test/test_ops.py::test_marlin_24[32-128-512-8--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7293128Z test/test_ops.py::test_marlin_24[32-128-512-8--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7293792Z test/test_ops.py::test_marlin_24[32-128-512-8--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7294462Z test/test_ops.py::test_marlin_24[32-128-512-8--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7295107Z test/test_ops.py::test_marlin_24[32-128-512-8-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7295757Z test/test_ops.py::test_marlin_24[32-128-512-8-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7296392Z test/test_ops.py::test_marlin_24[32-128-512-8-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7297053Z test/test_ops.py::test_marlin_24[32-128-512-8-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7297717Z test/test_ops.py::test_marlin_24[32-128-512-8-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7298430Z test/test_ops.py::test_marlin_24[32-128-512-8-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7299088Z test/test_ops.py::test_marlin_24[64-128-512-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7299728Z 
test/test_ops.py::test_marlin_24[64-128-512-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7300369Z test/test_ops.py::test_marlin_24[64-128-512-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7301011Z test/test_ops.py::test_marlin_24[64-128-512-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7301674Z test/test_ops.py::test_marlin_24[64-128-512-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7302326Z test/test_ops.py::test_marlin_24[64-128-512-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7847331Z test/test_ops.py::test_marlin_24[64-128-512-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7848009Z test/test_ops.py::test_marlin_24[64-128-512-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7848661Z test/test_ops.py::test_marlin_24[64-128-512-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7849473Z test/test_ops.py::test_marlin_24[64-128-512-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7850117Z test/test_ops.py::test_marlin_24[64-128-512-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7850766Z test/test_ops.py::test_marlin_24[64-128-512-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7851396Z test/test_ops.py::test_marlin_24[64-128-512-8--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7864461Z test/test_ops.py::test_marlin_24[64-128-512-8--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7865130Z test/test_ops.py::test_marlin_24[64-128-512-8--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7865799Z test/test_ops.py::test_marlin_24[64-128-512-8--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7866474Z test/test_ops.py::test_marlin_24[64-128-512-8--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7867137Z test/test_ops.py::test_marlin_24[64-128-512-8--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7867804Z test/test_ops.py::test_marlin_24[64-128-512-8-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7868435Z test/test_ops.py::test_marlin_24[64-128-512-8-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7869089Z test/test_ops.py::test_marlin_24[64-128-512-8-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7870523Z test/test_ops.py::test_marlin_24[64-128-512-8-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7871194Z test/test_ops.py::test_marlin_24[64-128-512-8-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7871869Z test/test_ops.py::test_marlin_24[64-128-512-8-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7872536Z test/test_ops.py::test_marlin_qqq[1-128-64-4--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.7873227Z test/test_ops.py::test_marlin_qqq[1-128-64-4--1-(1, 4, 8)] SKIPPED (...) 2025-09-09T14:59:01.7873902Z test/test_ops.py::test_marlin_qqq[1-128-64-4--1-(1, 7, 5)] SKIPPED (...) 
2025-09-09T14:59:01.7874752Z test/test_ops.py::test_marlin_qqq[1-128-64-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7875431Z test/test_ops.py::test_marlin_qqq[1-128-64-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7876077Z test/test_ops.py::test_marlin_qqq[1-128-64-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7876731Z test/test_ops.py::test_marlin_qqq[1-128-64-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7877368Z test/test_ops.py::test_marlin_qqq[1-128-64-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7878014Z test/test_ops.py::test_marlin_qqq[1-128-64-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7878681Z test/test_ops.py::test_marlin_qqq[1-128-64-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7879337Z test/test_ops.py::test_marlin_qqq[1-128-64-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7880007Z test/test_ops.py::test_marlin_qqq[1-128-64-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7880710Z test/test_ops.py::test_marlin_qqq[1-128-128-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7881368Z test/test_ops.py::test_marlin_qqq[1-128-128-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7882003Z test/test_ops.py::test_marlin_qqq[1-128-128-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7882658Z test/test_ops.py::test_marlin_qqq[1-128-128-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7883329Z test/test_ops.py::test_marlin_qqq[1-128-128-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7883987Z test/test_ops.py::test_marlin_qqq[1-128-128-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7884656Z test/test_ops.py::test_marlin_qqq[1-128-128-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7885295Z test/test_ops.py::test_marlin_qqq[1-128-128-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7885950Z test/test_ops.py::test_marlin_qqq[1-128-128-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7886604Z test/test_ops.py::test_marlin_qqq[1-128-128-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7887281Z test/test_ops.py::test_marlin_qqq[1-128-128-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7888005Z test/test_ops.py::test_marlin_qqq[1-128-128-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7888654Z test/test_ops.py::test_marlin_qqq[1-128-256-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7889301Z test/test_ops.py::test_marlin_qqq[1-128-256-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7889932Z test/test_ops.py::test_marlin_qqq[1-128-256-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7890592Z test/test_ops.py::test_marlin_qqq[1-128-256-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7891262Z test/test_ops.py::test_marlin_qqq[1-128-256-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7891916Z test/test_ops.py::test_marlin_qqq[1-128-256-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7892580Z test/test_ops.py::test_marlin_qqq[1-128-256-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7893224Z test/test_ops.py::test_marlin_qqq[1-128-256-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7893878Z test/test_ops.py::test_marlin_qqq[1-128-256-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7894530Z test/test_ops.py::test_marlin_qqq[1-128-256-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7895209Z test/test_ops.py::test_marlin_qqq[1-128-256-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7895920Z test/test_ops.py::test_marlin_qqq[1-128-256-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7896592Z test/test_ops.py::test_marlin_qqq[4-128-64-4--1-(1, 1, 1)] SKIPPED (...) 2025-09-09T14:59:01.7897286Z test/test_ops.py::test_marlin_qqq[4-128-64-4--1-(1, 4, 8)] SKIPPED (...) 
2025-09-09T14:59:01.7897962Z test/test_ops.py::test_marlin_qqq[4-128-64-4--1-(1, 7, 5)] SKIPPED (...) 2025-09-09T14:59:01.7898637Z test/test_ops.py::test_marlin_qqq[4-128-64-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7899283Z test/test_ops.py::test_marlin_qqq[4-128-64-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7899988Z test/test_ops.py::test_marlin_qqq[4-128-64-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7900643Z test/test_ops.py::test_marlin_qqq[4-128-64-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7901270Z test/test_ops.py::test_marlin_qqq[4-128-64-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7901919Z test/test_ops.py::test_marlin_qqq[4-128-64-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7902561Z test/test_ops.py::test_marlin_qqq[4-128-64-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7903231Z test/test_ops.py::test_marlin_qqq[4-128-64-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7903898Z test/test_ops.py::test_marlin_qqq[4-128-64-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7904541Z test/test_ops.py::test_marlin_qqq[4-128-128-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7905185Z test/test_ops.py::test_marlin_qqq[4-128-128-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7905855Z test/test_ops.py::test_marlin_qqq[4-128-128-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7906515Z test/test_ops.py::test_marlin_qqq[4-128-128-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7907225Z test/test_ops.py::test_marlin_qqq[4-128-128-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7907879Z test/test_ops.py::test_marlin_qqq[4-128-128-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7908545Z test/test_ops.py::test_marlin_qqq[4-128-128-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7909181Z test/test_ops.py::test_marlin_qqq[4-128-128-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7909843Z test/test_ops.py::test_marlin_qqq[4-128-128-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7910814Z test/test_ops.py::test_marlin_qqq[4-128-128-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7911486Z test/test_ops.py::test_marlin_qqq[4-128-128-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7912163Z test/test_ops.py::test_marlin_qqq[4-128-128-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7912816Z test/test_ops.py::test_marlin_qqq[4-128-256-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7913556Z test/test_ops.py::test_marlin_qqq[4-128-256-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7914190Z test/test_ops.py::test_marlin_qqq[4-128-256-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7914936Z test/test_ops.py::test_marlin_qqq[4-128-256-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7915611Z test/test_ops.py::test_marlin_qqq[4-128-256-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7916264Z test/test_ops.py::test_marlin_qqq[4-128-256-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7916932Z test/test_ops.py::test_marlin_qqq[4-128-256-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7917577Z test/test_ops.py::test_marlin_qqq[4-128-256-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7918237Z test/test_ops.py::test_marlin_qqq[4-128-256-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7918901Z test/test_ops.py::test_marlin_qqq[4-128-256-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7919589Z test/test_ops.py::test_marlin_qqq[4-128-256-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7920268Z test/test_ops.py::test_marlin_qqq[4-128-256-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7920940Z test/test_ops.py::test_marlin_qqq[8-128-64-4--1-(1, 1, 1)] SKIPPED (...) 
2025-09-09T14:59:01.7921696Z test/test_ops.py::test_marlin_qqq[8-128-64-4--1-(1, 4, 8)] SKIPPED (...) 2025-09-09T14:59:01.7922378Z test/test_ops.py::test_marlin_qqq[8-128-64-4--1-(1, 7, 5)] SKIPPED (...) 2025-09-09T14:59:01.7923056Z test/test_ops.py::test_marlin_qqq[8-128-64-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7923723Z test/test_ops.py::test_marlin_qqq[8-128-64-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7924374Z test/test_ops.py::test_marlin_qqq[8-128-64-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7925084Z test/test_ops.py::test_marlin_qqq[8-128-64-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.7925724Z test/test_ops.py::test_marlin_qqq[8-128-64-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.7926373Z test/test_ops.py::test_marlin_qqq[8-128-64-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.7927019Z test/test_ops.py::test_marlin_qqq[8-128-64-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.7927696Z test/test_ops.py::test_marlin_qqq[8-128-64-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.7928376Z test/test_ops.py::test_marlin_qqq[8-128-64-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.7929016Z test/test_ops.py::test_marlin_qqq[8-128-128-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8454753Z test/test_ops.py::test_marlin_qqq[8-128-128-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8455400Z test/test_ops.py::test_marlin_qqq[8-128-128-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8456065Z test/test_ops.py::test_marlin_qqq[8-128-128-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8456884Z test/test_ops.py::test_marlin_qqq[8-128-128-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8457552Z test/test_ops.py::test_marlin_qqq[8-128-128-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8458385Z test/test_ops.py::test_marlin_qqq[8-128-128-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8459004Z test/test_ops.py::test_marlin_qqq[8-128-128-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8459646Z test/test_ops.py::test_marlin_qqq[8-128-128-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8460281Z test/test_ops.py::test_marlin_qqq[8-128-128-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8460944Z test/test_ops.py::test_marlin_qqq[8-128-128-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8461602Z test/test_ops.py::test_marlin_qqq[8-128-128-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8462231Z test/test_ops.py::test_marlin_qqq[8-128-256-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8462865Z test/test_ops.py::test_marlin_qqq[8-128-256-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8463483Z test/test_ops.py::test_marlin_qqq[8-128-256-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8464191Z test/test_ops.py::test_marlin_qqq[8-128-256-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8464835Z test/test_ops.py::test_marlin_qqq[8-128-256-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8465468Z test/test_ops.py::test_marlin_qqq[8-128-256-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8466108Z test/test_ops.py::test_marlin_qqq[8-128-256-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8466731Z test/test_ops.py::test_marlin_qqq[8-128-256-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8467365Z test/test_ops.py::test_marlin_qqq[8-128-256-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8467998Z test/test_ops.py::test_marlin_qqq[8-128-256-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8468651Z test/test_ops.py::test_marlin_qqq[8-128-256-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8469313Z test/test_ops.py::test_marlin_qqq[8-128-256-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8469944Z test/test_ops.py::test_marlin_qqq[16-128-64-4--1-(1, 1, 
1)] SKIPPED 2025-09-09T14:59:01.8470570Z test/test_ops.py::test_marlin_qqq[16-128-64-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8471182Z test/test_ops.py::test_marlin_qqq[16-128-64-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8471869Z test/test_ops.py::test_marlin_qqq[16-128-64-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8472515Z test/test_ops.py::test_marlin_qqq[16-128-64-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8473160Z test/test_ops.py::test_marlin_qqq[16-128-64-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8473796Z test/test_ops.py::test_marlin_qqq[16-128-64-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8474418Z test/test_ops.py::test_marlin_qqq[16-128-64-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8475392Z test/test_ops.py::test_marlin_qqq[16-128-64-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8476053Z test/test_ops.py::test_marlin_qqq[16-128-64-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8476726Z test/test_ops.py::test_marlin_qqq[16-128-64-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8477396Z test/test_ops.py::test_marlin_qqq[16-128-64-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8478047Z test/test_ops.py::test_marlin_qqq[16-128-128-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8478697Z test/test_ops.py::test_marlin_qqq[16-128-128-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8479335Z test/test_ops.py::test_marlin_qqq[16-128-128-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8479994Z test/test_ops.py::test_marlin_qqq[16-128-128-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8480651Z test/test_ops.py::test_marlin_qqq[16-128-128-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8481320Z test/test_ops.py::test_marlin_qqq[16-128-128-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8482057Z test/test_ops.py::test_marlin_qqq[16-128-128-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8482711Z test/test_ops.py::test_marlin_qqq[16-128-128-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8483373Z test/test_ops.py::test_marlin_qqq[16-128-128-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8484031Z test/test_ops.py::test_marlin_qqq[16-128-128-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8484717Z test/test_ops.py::test_marlin_qqq[16-128-128-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8485400Z test/test_ops.py::test_marlin_qqq[16-128-128-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8486052Z test/test_ops.py::test_marlin_qqq[16-128-256-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8486702Z test/test_ops.py::test_marlin_qqq[16-128-256-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8487341Z test/test_ops.py::test_marlin_qqq[16-128-256-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8488009Z test/test_ops.py::test_marlin_qqq[16-128-256-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8488752Z test/test_ops.py::test_marlin_qqq[16-128-256-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8489408Z test/test_ops.py::test_marlin_qqq[16-128-256-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8490070Z test/test_ops.py::test_marlin_qqq[16-128-256-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8490717Z test/test_ops.py::test_marlin_qqq[16-128-256-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8491375Z test/test_ops.py::test_marlin_qqq[16-128-256-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8492031Z test/test_ops.py::test_marlin_qqq[16-128-256-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8492715Z test/test_ops.py::test_marlin_qqq[16-128-256-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8493398Z test/test_ops.py::test_marlin_qqq[16-128-256-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8494049Z 
test/test_ops.py::test_marlin_qqq[32-128-64-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8494701Z test/test_ops.py::test_marlin_qqq[32-128-64-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8495338Z test/test_ops.py::test_marlin_qqq[32-128-64-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8495994Z test/test_ops.py::test_marlin_qqq[32-128-64-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8496663Z test/test_ops.py::test_marlin_qqq[32-128-64-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8497359Z test/test_ops.py::test_marlin_qqq[32-128-64-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8498019Z test/test_ops.py::test_marlin_qqq[32-128-64-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8498660Z test/test_ops.py::test_marlin_qqq[32-128-64-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8499312Z test/test_ops.py::test_marlin_qqq[32-128-64-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8499960Z test/test_ops.py::test_marlin_qqq[32-128-64-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8500681Z test/test_ops.py::test_marlin_qqq[32-128-64-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8501355Z test/test_ops.py::test_marlin_qqq[32-128-64-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8502004Z test/test_ops.py::test_marlin_qqq[32-128-128-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8502658Z test/test_ops.py::test_marlin_qqq[32-128-128-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8503294Z test/test_ops.py::test_marlin_qqq[32-128-128-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8503958Z test/test_ops.py::test_marlin_qqq[32-128-128-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8504628Z test/test_ops.py::test_marlin_qqq[32-128-128-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8505310Z test/test_ops.py::test_marlin_qqq[32-128-128-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8505960Z test/test_ops.py::test_marlin_qqq[32-128-128-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8506624Z test/test_ops.py::test_marlin_qqq[32-128-128-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8507309Z test/test_ops.py::test_marlin_qqq[32-128-128-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8507996Z test/test_ops.py::test_marlin_qqq[32-128-128-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8508686Z test/test_ops.py::test_marlin_qqq[32-128-128-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8509361Z test/test_ops.py::test_marlin_qqq[32-128-128-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8510291Z test/test_ops.py::test_marlin_qqq[32-128-256-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8510950Z test/test_ops.py::test_marlin_qqq[32-128-256-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8511609Z test/test_ops.py::test_marlin_qqq[32-128-256-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8512260Z test/test_ops.py::test_marlin_qqq[32-128-256-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8512935Z test/test_ops.py::test_marlin_qqq[32-128-256-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8513618Z test/test_ops.py::test_marlin_qqq[32-128-256-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8514361Z test/test_ops.py::test_marlin_qqq[32-128-256-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8515102Z test/test_ops.py::test_marlin_qqq[32-128-256-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8515751Z test/test_ops.py::test_marlin_qqq[32-128-256-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8516431Z test/test_ops.py::test_marlin_qqq[32-128-256-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8517117Z test/test_ops.py::test_marlin_qqq[32-128-256-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8517791Z test/test_ops.py::test_marlin_qqq[32-128-256-4-128-(67, 13, 11)] 
SKIPPED 2025-09-09T14:59:01.8518451Z test/test_ops.py::test_marlin_qqq[64-128-64-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8519077Z test/test_ops.py::test_marlin_qqq[64-128-64-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.8519722Z test/test_ops.py::test_marlin_qqq[64-128-64-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.8520371Z test/test_ops.py::test_marlin_qqq[64-128-64-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.8521036Z test/test_ops.py::test_marlin_qqq[64-128-64-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.8521696Z test/test_ops.py::test_marlin_qqq[64-128-64-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.8522397Z test/test_ops.py::test_marlin_qqq[64-128-64-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.8523052Z test/test_ops.py::test_marlin_qqq[64-128-64-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.9024853Z test/test_ops.py::test_marlin_qqq[64-128-64-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.9025530Z test/test_ops.py::test_marlin_qqq[64-128-64-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.9026207Z test/test_ops.py::test_marlin_qqq[64-128-64-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.9026865Z test/test_ops.py::test_marlin_qqq[64-128-64-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.9027695Z test/test_ops.py::test_marlin_qqq[64-128-128-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.9028511Z test/test_ops.py::test_marlin_qqq[64-128-128-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.9029144Z test/test_ops.py::test_marlin_qqq[64-128-128-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.9029776Z test/test_ops.py::test_marlin_qqq[64-128-128-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.9030434Z test/test_ops.py::test_marlin_qqq[64-128-128-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.9031087Z test/test_ops.py::test_marlin_qqq[64-128-128-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.9031720Z test/test_ops.py::test_marlin_qqq[64-128-128-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.9032367Z test/test_ops.py::test_marlin_qqq[64-128-128-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.9032999Z test/test_ops.py::test_marlin_qqq[64-128-128-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.9033716Z test/test_ops.py::test_marlin_qqq[64-128-128-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.9034391Z test/test_ops.py::test_marlin_qqq[64-128-128-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.9035119Z test/test_ops.py::test_marlin_qqq[64-128-128-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.9035764Z test/test_ops.py::test_marlin_qqq[64-128-256-4--1-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.9036385Z test/test_ops.py::test_marlin_qqq[64-128-256-4--1-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.9037017Z test/test_ops.py::test_marlin_qqq[64-128-256-4--1-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.9037646Z test/test_ops.py::test_marlin_qqq[64-128-256-4--1-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.9038298Z test/test_ops.py::test_marlin_qqq[64-128-256-4--1-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.9038947Z test/test_ops.py::test_marlin_qqq[64-128-256-4--1-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.9039576Z test/test_ops.py::test_marlin_qqq[64-128-256-4-128-(1, 1, 1)] SKIPPED 2025-09-09T14:59:01.9040222Z test/test_ops.py::test_marlin_qqq[64-128-256-4-128-(1, 4, 8)] SKIPPED 2025-09-09T14:59:01.9040909Z test/test_ops.py::test_marlin_qqq[64-128-256-4-128-(1, 7, 5)] SKIPPED 2025-09-09T14:59:01.9041563Z test/test_ops.py::test_marlin_qqq[64-128-256-4-128-(13, 17, 67)] SKIPPED 2025-09-09T14:59:01.9042262Z test/test_ops.py::test_marlin_qqq[64-128-256-4-128-(26, 37, 13)] SKIPPED 2025-09-09T14:59:01.9042912Z 
test/test_ops.py::test_marlin_qqq[64-128-256-4-128-(67, 13, 11)] SKIPPED 2025-09-09T14:59:01.9043556Z test/test_ops.py::test_swizzle_mm SKIPPED (ROCm not available) 2025-09-09T14:59:01.9044234Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9045125Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9045876Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9046616Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9047366Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9048092Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9048828Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-2-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9049615Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-2-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9050345Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-2-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9051086Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-2-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9051811Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-2-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9052556Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-2-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9053345Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-128-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9054075Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-128-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9054836Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-128-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9055590Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-128-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9056354Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-128-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9057119Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-128-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9057865Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1024-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9058622Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1024-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9059378Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1024-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9060196Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1024-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9060965Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1024-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9061738Z test/test_ops.py::test_scaled_embedding_bag_cpu[1-1024-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9062494Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9063214Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9063951Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9064687Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9065433Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9066178Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9066904Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-2-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9067683Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-2-1-torch.int32] 
SKIPPED 2025-09-09T14:59:01.9068412Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-2-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9069166Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-2-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9069895Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-2-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9070775Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-2-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9071526Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-128-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9072259Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-128-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9073014Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-128-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9073772Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-128-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9074611Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-128-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9075386Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-128-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9076186Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1024-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9076951Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1024-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9077706Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1024-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9078489Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1024-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9079255Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1024-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9080110Z test/test_ops.py::test_scaled_embedding_bag_cpu[2-1024-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9080869Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9081618Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9082366Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9083108Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9083862Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9084615Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9085338Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-2-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9086072Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-2-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9086843Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-2-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9087598Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-2-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9088474Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-2-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9089186Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-2-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9090087Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-128-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9090799Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-128-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9091531Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-128-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9092265Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-128-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9093003Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-128-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9093747Z 
test/test_ops.py::test_scaled_embedding_bag_cpu[3-128-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9094509Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1024-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9095243Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1024-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9447092Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1024-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9447950Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1024-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9448702Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1024-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9449440Z test/test_ops.py::test_scaled_embedding_bag_cpu[3-1024-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9450176Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9450881Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9451617Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9452352Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9453069Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9453968Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9454685Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-2-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9455436Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-2-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9456167Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-2-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9456888Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-2-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9457616Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-2-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9458397Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-2-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9459128Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-128-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9459858Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-128-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9460767Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-128-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9461546Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-128-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9462313Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-128-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9463085Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-128-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9463843Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1024-1-torch.int64] SKIPPED 2025-09-09T14:59:01.9464610Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1024-1-torch.int32] SKIPPED 2025-09-09T14:59:01.9465456Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1024-128-torch.int64] SKIPPED 2025-09-09T14:59:01.9466233Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1024-128-torch.int32] SKIPPED 2025-09-09T14:59:01.9467013Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1024-512-torch.int64] SKIPPED 2025-09-09T14:59:01.9467783Z test/test_ops.py::test_scaled_embedding_bag_cpu[10-1024-512-torch.int32] SKIPPED 2025-09-09T14:59:01.9468804Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype0-1-size_mnk0-False] SKIPPED 2025-09-09T14:59:01.9469984Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype1-1-size_mnk1-True] SKIPPED 2025-09-09T14:59:01.9471151Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype2-1-size_mnk2-False] SKIPPED 2025-09-09T14:59:01.9472323Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype3-1-size_mnk3-True] SKIPPED 2025-09-09T14:59:01.9473549Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype4-1-size_mnk4-False] SKIPPED 2025-09-09T14:59:01.9474818Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype5-1-size_mnk5-True] SKIPPED 2025-09-09T14:59:01.9476001Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype6-1-size_mnk6-False] SKIPPED 2025-09-09T14:59:01.9477149Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype7-1-size_mnk7-True] SKIPPED 2025-09-09T14:59:01.9478323Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype8-1-size_mnk8-False] SKIPPED 2025-09-09T14:59:01.9479504Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype9-1-size_mnk9-True] SKIPPED 2025-09-09T14:59:01.9480683Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype10-1-size_mnk10-False] SKIPPED 2025-09-09T14:59:01.9481874Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype11-1-size_mnk11-True] SKIPPED 2025-09-09T14:59:01.9483100Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype12-4-size_mnk12-False] SKIPPED 2025-09-09T14:59:01.9484296Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype13-4-size_mnk13-True] SKIPPED 2025-09-09T14:59:01.9485485Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype14-4-size_mnk14-False] SKIPPED 2025-09-09T14:59:01.9486659Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype15-4-size_mnk15-True] SKIPPED 2025-09-09T14:59:01.9487888Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype16-4-size_mnk16-False] SKIPPED 2025-09-09T14:59:01.9489076Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype17-4-size_mnk17-True] SKIPPED 2025-09-09T14:59:01.9490253Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype18-4-size_mnk18-False] SKIPPED 2025-09-09T14:59:01.9491432Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype19-4-size_mnk19-True] SKIPPED 2025-09-09T14:59:01.9492599Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype20-4-size_mnk20-False] SKIPPED 2025-09-09T14:59:01.9493784Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype21-4-size_mnk21-True] SKIPPED 2025-09-09T14:59:01.9495008Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype22-4-size_mnk22-False] SKIPPED 2025-09-09T14:59:01.9496190Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype23-4-size_mnk23-True] SKIPPED 2025-09-09T14:59:01.9497379Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype24-8-size_mnk24-False] SKIPPED 2025-09-09T14:59:01.9498557Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype25-8-size_mnk25-True] SKIPPED 2025-09-09T14:59:01.9499751Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype26-8-size_mnk26-False] SKIPPED 2025-09-09T14:59:01.9500932Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype27-8-size_mnk27-True] SKIPPED 2025-09-09T14:59:01.9502110Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype28-8-size_mnk28-False] SKIPPED 2025-09-09T14:59:01.9503341Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype29-8-size_mnk29-True] SKIPPED 2025-09-09T14:59:01.9504533Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype30-8-size_mnk30-False] SKIPPED 2025-09-09T14:59:01.9505707Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype31-8-size_mnk31-True] SKIPPED 2025-09-09T14:59:01.9506910Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype32-8-size_mnk32-False] SKIPPED 2025-09-09T14:59:01.9508078Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype33-8-size_mnk33-True] SKIPPED 2025-09-09T14:59:01.9509273Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype34-8-size_mnk34-False] SKIPPED 2025-09-09T14:59:01.9510725Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype35-8-size_mnk35-True] SKIPPED 2025-09-09T14:59:01.9511912Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype36-16-size_mnk36-False] SKIPPED 2025-09-09T14:59:01.9513197Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype37-16-size_mnk37-True] SKIPPED 2025-09-09T14:59:01.9514409Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype38-16-size_mnk38-False] SKIPPED 2025-09-09T14:59:01.9515672Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype39-16-size_mnk39-True] SKIPPED 2025-09-09T14:59:01.9516873Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype40-16-size_mnk40-False] SKIPPED 2025-09-09T14:59:01.9518292Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype41-16-size_mnk41-True] SKIPPED 2025-09-09T14:59:01.9519459Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype42-16-size_mnk42-False] SKIPPED 2025-09-09T14:59:01.9804917Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype43-16-size_mnk43-True] SKIPPED 2025-09-09T14:59:01.9806107Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype44-16-size_mnk44-False] SKIPPED 2025-09-09T14:59:01.9807279Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype45-16-size_mnk45-True] SKIPPED 2025-09-09T14:59:01.9808446Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype46-16-size_mnk46-False] SKIPPED 2025-09-09T14:59:01.9809693Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype47-16-size_mnk47-True] SKIPPED 2025-09-09T14:59:01.9811091Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype48-32-size_mnk48-False] SKIPPED 2025-09-09T14:59:01.9812260Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype49-32-size_mnk49-True] SKIPPED 2025-09-09T14:59:01.9813419Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype50-32-size_mnk50-False] SKIPPED 2025-09-09T14:59:01.9814589Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype51-32-size_mnk51-True] SKIPPED 2025-09-09T14:59:01.9815733Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype52-32-size_mnk52-False] SKIPPED 2025-09-09T14:59:01.9816895Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype53-32-size_mnk53-True] SKIPPED 2025-09-09T14:59:01.9818069Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype54-32-size_mnk54-False] SKIPPED 2025-09-09T14:59:01.9819297Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype55-32-size_mnk55-True] SKIPPED 2025-09-09T14:59:01.9820467Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype56-32-size_mnk56-False] SKIPPED 2025-09-09T14:59:01.9821631Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype57-32-size_mnk57-True] SKIPPED 2025-09-09T14:59:01.9822873Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype58-32-size_mnk58-False] SKIPPED 2025-09-09T14:59:01.9824040Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype59-32-size_mnk59-True] SKIPPED 2025-09-09T14:59:01.9825190Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype60-64-size_mnk60-False] SKIPPED 2025-09-09T14:59:01.9826365Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype61-64-size_mnk61-True] SKIPPED 2025-09-09T14:59:01.9827616Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype62-64-size_mnk62-False] SKIPPED 2025-09-09T14:59:01.9828771Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype63-64-size_mnk63-True] SKIPPED 2025-09-09T14:59:01.9829935Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype64-64-size_mnk64-False] SKIPPED 2025-09-09T14:59:01.9831085Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype65-64-size_mnk65-True] SKIPPED 2025-09-09T14:59:01.9832373Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype66-64-size_mnk66-False] SKIPPED 2025-09-09T14:59:01.9833534Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype67-64-size_mnk67-True] SKIPPED 2025-09-09T14:59:01.9834759Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype68-64-size_mnk68-False] SKIPPED 2025-09-09T14:59:01.9835931Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype69-64-size_mnk69-True] SKIPPED 2025-09-09T14:59:01.9837093Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype70-64-size_mnk70-False] SKIPPED 2025-09-09T14:59:01.9838239Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype71-64-size_mnk71-True] SKIPPED 2025-09-09T14:59:01.9839396Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype72-1-size_mnk72-False] SKIPPED 2025-09-09T14:59:01.9840586Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype73-1-size_mnk73-True] SKIPPED 2025-09-09T14:59:01.9841750Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype74-1-size_mnk74-False] SKIPPED 2025-09-09T14:59:01.9842910Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype75-1-size_mnk75-True] SKIPPED 2025-09-09T14:59:01.9844062Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype76-1-size_mnk76-False] SKIPPED 2025-09-09T14:59:01.9845224Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype77-1-size_mnk77-True] SKIPPED 2025-09-09T14:59:01.9846384Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype78-1-size_mnk78-False] SKIPPED 2025-09-09T14:59:01.9847537Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype79-1-size_mnk79-True] SKIPPED 2025-09-09T14:59:01.9848743Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype80-1-size_mnk80-False] SKIPPED 2025-09-09T14:59:01.9849887Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype81-1-size_mnk81-True] SKIPPED 2025-09-09T14:59:01.9851043Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype82-1-size_mnk82-False] SKIPPED 2025-09-09T14:59:01.9852199Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype83-1-size_mnk83-True] SKIPPED 2025-09-09T14:59:01.9853344Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype84-4-size_mnk84-False] SKIPPED 2025-09-09T14:59:01.9854502Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype85-4-size_mnk85-True] SKIPPED 2025-09-09T14:59:01.9855643Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype86-4-size_mnk86-False] SKIPPED 2025-09-09T14:59:01.9856789Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype87-4-size_mnk87-True] SKIPPED 2025-09-09T14:59:01.9857980Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype88-4-size_mnk88-False] SKIPPED 2025-09-09T14:59:01.9859123Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype89-4-size_mnk89-True] SKIPPED 2025-09-09T14:59:01.9860271Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype90-4-size_mnk90-False] SKIPPED 2025-09-09T14:59:01.9861430Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype91-4-size_mnk91-True] SKIPPED 2025-09-09T14:59:01.9862612Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype92-4-size_mnk92-False] SKIPPED 2025-09-09T14:59:01.9863776Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype93-4-size_mnk93-True] SKIPPED 2025-09-09T14:59:01.9864918Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype94-4-size_mnk94-False] SKIPPED 2025-09-09T14:59:01.9866077Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype95-4-size_mnk95-True] SKIPPED 2025-09-09T14:59:01.9867230Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype96-8-size_mnk96-False] SKIPPED 2025-09-09T14:59:01.9868374Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype97-8-size_mnk97-True] SKIPPED 2025-09-09T14:59:01.9869564Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype98-8-size_mnk98-False] SKIPPED 2025-09-09T14:59:01.9870719Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype99-8-size_mnk99-True] SKIPPED 2025-09-09T14:59:01.9871875Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype100-8-size_mnk100-False] SKIPPED 2025-09-09T14:59:01.9873045Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype101-8-size_mnk101-True] SKIPPED 2025-09-09T14:59:01.9874205Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype102-8-size_mnk102-False] SKIPPED 2025-09-09T14:59:01.9875472Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype103-8-size_mnk103-True] SKIPPED 2025-09-09T14:59:02.0164513Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype104-8-size_mnk104-False] SKIPPED 2025-09-09T14:59:02.0165845Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype105-8-size_mnk105-True] SKIPPED 2025-09-09T14:59:02.0167021Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype106-8-size_mnk106-False] SKIPPED 2025-09-09T14:59:02.0168192Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype107-8-size_mnk107-True] SKIPPED 2025-09-09T14:59:02.0169356Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype108-16-size_mnk108-False] SKIPPED 2025-09-09T14:59:02.0170540Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype109-16-size_mnk109-True] SKIPPED 2025-09-09T14:59:02.0171711Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype110-16-size_mnk110-False] SKIPPED 2025-09-09T14:59:02.0172889Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype111-16-size_mnk111-True] SKIPPED 2025-09-09T14:59:02.0174065Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype112-16-size_mnk112-False] SKIPPED 2025-09-09T14:59:02.0175288Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype113-16-size_mnk113-True] SKIPPED 2025-09-09T14:59:02.0176468Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype114-16-size_mnk114-False] SKIPPED 2025-09-09T14:59:02.0177650Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype115-16-size_mnk115-True] SKIPPED 2025-09-09T14:59:02.0178822Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype116-16-size_mnk116-False] SKIPPED 2025-09-09T14:59:02.0180056Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype117-16-size_mnk117-True] SKIPPED 2025-09-09T14:59:02.0181226Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype118-16-size_mnk118-False] SKIPPED 2025-09-09T14:59:02.0182403Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype119-16-size_mnk119-True] SKIPPED 2025-09-09T14:59:02.0183581Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype120-32-size_mnk120-False] SKIPPED 2025-09-09T14:59:02.0184748Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype121-32-size_mnk121-True] SKIPPED 2025-09-09T14:59:02.0185925Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype122-32-size_mnk122-False] SKIPPED 2025-09-09T14:59:02.0187154Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype123-32-size_mnk123-True] SKIPPED 2025-09-09T14:59:02.0188327Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype124-32-size_mnk124-False] SKIPPED 2025-09-09T14:59:02.0189508Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype125-32-size_mnk125-True] SKIPPED 2025-09-09T14:59:02.0190676Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype126-32-size_mnk126-False] SKIPPED 2025-09-09T14:59:02.0191860Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype127-32-size_mnk127-True] SKIPPED 2025-09-09T14:59:02.0193056Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype128-32-size_mnk128-False] SKIPPED 2025-09-09T14:59:02.0194226Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype129-32-size_mnk129-True] SKIPPED 2025-09-09T14:59:02.0195624Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype130-32-size_mnk130-False] SKIPPED 2025-09-09T14:59:02.0196804Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype131-32-size_mnk131-True] SKIPPED 2025-09-09T14:59:02.0197973Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype132-64-size_mnk132-False] SKIPPED 2025-09-09T14:59:02.0199151Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype133-64-size_mnk133-True] SKIPPED 2025-09-09T14:59:02.0200336Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype134-64-size_mnk134-False] SKIPPED 2025-09-09T14:59:02.0201508Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype135-64-size_mnk135-True] SKIPPED 2025-09-09T14:59:02.0202686Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype136-64-size_mnk136-False] SKIPPED 2025-09-09T14:59:02.0203854Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype137-64-size_mnk137-True] SKIPPED 2025-09-09T14:59:02.0205073Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype138-64-size_mnk138-False] SKIPPED 
2025-09-09T14:59:02.0206261Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype139-64-size_mnk139-True] SKIPPED 2025-09-09T14:59:02.0207427Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype140-64-size_mnk140-False] SKIPPED 2025-09-09T14:59:02.0208605Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype141-64-size_mnk141-True] SKIPPED 2025-09-09T14:59:02.0209816Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype142-64-size_mnk142-False] SKIPPED 2025-09-09T14:59:02.0211240Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s4s4[dtype143-64-size_mnk143-True] SKIPPED 2025-09-09T14:59:02.0212407Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype0-1-size_mnk0-False] SKIPPED 2025-09-09T14:59:02.0213540Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype1-1-size_mnk1-True] SKIPPED 2025-09-09T14:59:02.0214687Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype2-1-size_mnk2-False] SKIPPED 2025-09-09T14:59:02.0215828Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype3-1-size_mnk3-True] SKIPPED 2025-09-09T14:59:02.0217026Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype4-1-size_mnk4-False] SKIPPED 2025-09-09T14:59:02.0218179Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype5-1-size_mnk5-True] SKIPPED 2025-09-09T14:59:02.0219307Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype6-1-size_mnk6-False] SKIPPED 2025-09-09T14:59:02.0220449Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype7-1-size_mnk7-True] SKIPPED 2025-09-09T14:59:02.0221592Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype8-1-size_mnk8-False] SKIPPED 2025-09-09T14:59:02.0222721Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype9-1-size_mnk9-True] SKIPPED 2025-09-09T14:59:02.0223871Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype10-1-size_mnk10-False] SKIPPED 2025-09-09T14:59:02.0225111Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype11-1-size_mnk11-True] SKIPPED 2025-09-09T14:59:02.0226254Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype12-4-size_mnk12-False] SKIPPED 2025-09-09T14:59:02.0227412Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype13-4-size_mnk13-True] SKIPPED 2025-09-09T14:59:02.0228716Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype14-4-size_mnk14-False] SKIPPED 2025-09-09T14:59:02.0229872Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype15-4-size_mnk15-True] SKIPPED 2025-09-09T14:59:02.0231021Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype16-4-size_mnk16-False] SKIPPED 2025-09-09T14:59:02.0232163Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype17-4-size_mnk17-True] SKIPPED 2025-09-09T14:59:02.0233318Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype18-4-size_mnk18-False] SKIPPED 2025-09-09T14:59:02.0234567Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype19-4-size_mnk19-True] SKIPPED 2025-09-09T14:59:02.0235730Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype20-4-size_mnk20-False] SKIPPED 2025-09-09T14:59:02.0517712Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype21-4-size_mnk21-True] SKIPPED 2025-09-09T14:59:02.0518947Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype22-4-size_mnk22-False] SKIPPED 2025-09-09T14:59:02.0520284Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype23-4-size_mnk23-True] SKIPPED 2025-09-09T14:59:02.0521471Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype24-8-size_mnk24-False] SKIPPED 2025-09-09T14:59:02.0522644Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype25-8-size_mnk25-True] SKIPPED 2025-09-09T14:59:02.0523834Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype26-8-size_mnk26-False] SKIPPED 2025-09-09T14:59:02.0525024Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype27-8-size_mnk27-True] SKIPPED 2025-09-09T14:59:02.0526197Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype28-8-size_mnk28-False] SKIPPED 2025-09-09T14:59:02.0527449Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype29-8-size_mnk29-True] SKIPPED 2025-09-09T14:59:02.0528639Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype30-8-size_mnk30-False] SKIPPED 2025-09-09T14:59:02.0529831Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype31-8-size_mnk31-True] SKIPPED 2025-09-09T14:59:02.0531029Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype32-8-size_mnk32-False] SKIPPED 2025-09-09T14:59:02.0532199Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype33-8-size_mnk33-True] SKIPPED 2025-09-09T14:59:02.0533383Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype34-8-size_mnk34-False] SKIPPED 2025-09-09T14:59:02.0534575Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype35-8-size_mnk35-True] SKIPPED 2025-09-09T14:59:02.0535759Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype36-16-size_mnk36-False] SKIPPED 2025-09-09T14:59:02.0537005Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype37-16-size_mnk37-True] SKIPPED 2025-09-09T14:59:02.0538191Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype38-16-size_mnk38-False] SKIPPED 2025-09-09T14:59:02.0539393Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype39-16-size_mnk39-True] SKIPPED 2025-09-09T14:59:02.0540586Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype40-16-size_mnk40-False] SKIPPED 2025-09-09T14:59:02.0541775Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype41-16-size_mnk41-True] SKIPPED 2025-09-09T14:59:02.0542981Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype42-16-size_mnk42-False] SKIPPED 2025-09-09T14:59:02.0544183Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype43-16-size_mnk43-True] SKIPPED 2025-09-09T14:59:02.0545423Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype44-16-size_mnk44-False] SKIPPED 2025-09-09T14:59:02.0546623Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype45-16-size_mnk45-True] SKIPPED 2025-09-09T14:59:02.0547810Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype46-16-size_mnk46-False] SKIPPED 2025-09-09T14:59:02.0549002Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype47-16-size_mnk47-True] SKIPPED 2025-09-09T14:59:02.0550201Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype48-32-size_mnk48-False] SKIPPED 2025-09-09T14:59:02.0551423Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype49-32-size_mnk49-True] SKIPPED 2025-09-09T14:59:02.0552622Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype50-32-size_mnk50-False] SKIPPED 2025-09-09T14:59:02.0553809Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype51-32-size_mnk51-True] SKIPPED 2025-09-09T14:59:02.0555075Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype52-32-size_mnk52-False] SKIPPED 2025-09-09T14:59:02.0556278Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype53-32-size_mnk53-True] SKIPPED 2025-09-09T14:59:02.0557469Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype54-32-size_mnk54-False] SKIPPED 2025-09-09T14:59:02.0558714Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype55-32-size_mnk55-True] SKIPPED 2025-09-09T14:59:02.0559914Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype56-32-size_mnk56-False] SKIPPED 2025-09-09T14:59:02.0561096Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype57-32-size_mnk57-True] SKIPPED 2025-09-09T14:59:02.0562294Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype58-32-size_mnk58-False] SKIPPED 2025-09-09T14:59:02.0563481Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype59-32-size_mnk59-True] SKIPPED 2025-09-09T14:59:02.0564682Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype60-64-size_mnk60-False] SKIPPED 2025-09-09T14:59:02.0565874Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype61-64-size_mnk61-True] SKIPPED 2025-09-09T14:59:02.0567102Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype62-64-size_mnk62-False] SKIPPED 2025-09-09T14:59:02.0568463Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype63-64-size_mnk63-True] SKIPPED 2025-09-09T14:59:02.0569628Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype64-64-size_mnk64-False] SKIPPED 2025-09-09T14:59:02.0570776Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype65-64-size_mnk65-True] SKIPPED 2025-09-09T14:59:02.0571936Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype66-64-size_mnk66-False] SKIPPED 2025-09-09T14:59:02.0573086Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype67-64-size_mnk67-True] SKIPPED 2025-09-09T14:59:02.0574245Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype68-64-size_mnk68-False] SKIPPED 2025-09-09T14:59:02.0575547Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype69-64-size_mnk69-True] SKIPPED 2025-09-09T14:59:02.0576743Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype70-64-size_mnk70-False] SKIPPED 2025-09-09T14:59:02.0577907Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype71-64-size_mnk71-True] SKIPPED 2025-09-09T14:59:02.0579068Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype72-1-size_mnk72-False] SKIPPED 2025-09-09T14:59:02.0580209Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype73-1-size_mnk73-True] SKIPPED 2025-09-09T14:59:02.0581596Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype74-1-size_mnk74-False] SKIPPED 2025-09-09T14:59:02.0582771Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype75-1-size_mnk75-True] SKIPPED 2025-09-09T14:59:02.0583959Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype76-1-size_mnk76-False] SKIPPED 2025-09-09T14:59:02.0585147Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype77-1-size_mnk77-True] SKIPPED 2025-09-09T14:59:02.0586336Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype78-1-size_mnk78-False] SKIPPED 2025-09-09T14:59:02.0587533Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype79-1-size_mnk79-True] SKIPPED 2025-09-09T14:59:02.0588739Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype80-1-size_mnk80-False] SKIPPED 2025-09-09T14:59:02.0589943Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype81-1-size_mnk81-True] SKIPPED 2025-09-09T14:59:02.0879656Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype82-1-size_mnk82-False] SKIPPED 2025-09-09T14:59:02.0880958Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype83-1-size_mnk83-True] SKIPPED 2025-09-09T14:59:02.0882135Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype84-4-size_mnk84-False] SKIPPED 2025-09-09T14:59:02.0883288Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype85-4-size_mnk85-True] SKIPPED 2025-09-09T14:59:02.0884544Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype86-4-size_mnk86-False] SKIPPED 2025-09-09T14:59:02.0885861Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype87-4-size_mnk87-True] SKIPPED 2025-09-09T14:59:02.0887109Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype88-4-size_mnk88-False] SKIPPED 2025-09-09T14:59:02.0888288Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype89-4-size_mnk89-True] SKIPPED 2025-09-09T14:59:02.0889445Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype90-4-size_mnk90-False] SKIPPED 2025-09-09T14:59:02.0890697Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype91-4-size_mnk91-True] SKIPPED 2025-09-09T14:59:02.0891856Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype92-4-size_mnk92-False] SKIPPED 2025-09-09T14:59:02.0893122Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype93-4-size_mnk93-True] SKIPPED 2025-09-09T14:59:02.0894279Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype94-4-size_mnk94-False] SKIPPED 2025-09-09T14:59:02.0895515Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype95-4-size_mnk95-True] SKIPPED 2025-09-09T14:59:02.0896793Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype96-8-size_mnk96-False] SKIPPED 2025-09-09T14:59:02.0897938Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype97-8-size_mnk97-True] SKIPPED 2025-09-09T14:59:02.0899205Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype98-8-size_mnk98-False] SKIPPED 2025-09-09T14:59:02.0900411Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype99-8-size_mnk99-True] SKIPPED 2025-09-09T14:59:02.0901695Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype100-8-size_mnk100-False] SKIPPED 2025-09-09T14:59:02.0902888Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype101-8-size_mnk101-True] SKIPPED 2025-09-09T14:59:02.0904103Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype102-8-size_mnk102-False] SKIPPED 2025-09-09T14:59:02.0905337Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype103-8-size_mnk103-True] SKIPPED 2025-09-09T14:59:02.0906503Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype104-8-size_mnk104-False] SKIPPED 2025-09-09T14:59:02.0907856Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype105-8-size_mnk105-True] SKIPPED 2025-09-09T14:59:02.0909032Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype106-8-size_mnk106-False] SKIPPED 2025-09-09T14:59:02.0910506Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype107-8-size_mnk107-True] SKIPPED 2025-09-09T14:59:02.0911729Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype108-16-size_mnk108-False] SKIPPED 2025-09-09T14:59:02.0913046Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype109-16-size_mnk109-True] SKIPPED 2025-09-09T14:59:02.0914229Z 
test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype110-16-size_mnk110-False] SKIPPED 2025-09-09T14:59:02.0915498Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype111-16-size_mnk111-True] SKIPPED 2025-09-09T14:59:02.0916893Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype112-16-size_mnk112-False] SKIPPED 2025-09-09T14:59:02.0918062Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype113-16-size_mnk113-True] SKIPPED 2025-09-09T14:59:02.0919369Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype114-16-size_mnk114-False] SKIPPED 2025-09-09T14:59:02.0920533Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype115-16-size_mnk115-True] SKIPPED 2025-09-09T14:59:02.0921711Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype116-16-size_mnk116-False] SKIPPED 2025-09-09T14:59:02.0922888Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype117-16-size_mnk117-True] SKIPPED 2025-09-09T14:59:02.0924385Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype118-16-size_mnk118-False] SKIPPED 2025-09-09T14:59:02.0925600Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype119-16-size_mnk119-True] SKIPPED 2025-09-09T14:59:02.0927003Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype120-32-size_mnk120-False] SKIPPED 2025-09-09T14:59:02.0928220Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype121-32-size_mnk121-True] SKIPPED 2025-09-09T14:59:02.0929607Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype122-32-size_mnk122-False] SKIPPED 2025-09-09T14:59:02.0930898Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype123-32-size_mnk123-True] SKIPPED 2025-09-09T14:59:02.0932144Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype124-32-size_mnk124-False] SKIPPED 2025-09-09T14:59:02.0933460Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype125-32-size_mnk125-True] SKIPPED 2025-09-09T14:59:02.0934640Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype126-32-size_mnk126-False] SKIPPED 2025-09-09T14:59:02.0935936Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype127-32-size_mnk127-True] SKIPPED 2025-09-09T14:59:02.0937110Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype128-32-size_mnk128-False] SKIPPED 2025-09-09T14:59:02.0938414Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype129-32-size_mnk129-True] SKIPPED 2025-09-09T14:59:02.0939658Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype130-32-size_mnk130-False] SKIPPED 2025-09-09T14:59:02.0940830Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype131-32-size_mnk131-True] SKIPPED 2025-09-09T14:59:02.0942139Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype132-64-size_mnk132-False] SKIPPED 
2025-09-09T14:59:02.0943324Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype133-64-size_mnk133-True] SKIPPED 2025-09-09T14:59:02.0944620Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype134-64-size_mnk134-False] SKIPPED 2025-09-09T14:59:02.0945805Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype135-64-size_mnk135-True] SKIPPED 2025-09-09T14:59:02.0947095Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype136-64-size_mnk136-False] SKIPPED 2025-09-09T14:59:02.0948338Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype137-64-size_mnk137-True] SKIPPED 2025-09-09T14:59:02.0949652Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype138-64-size_mnk138-False] SKIPPED 2025-09-09T14:59:02.0950882Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype139-64-size_mnk139-True] SKIPPED 2025-09-09T14:59:02.0952058Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype140-64-size_mnk140-False] SKIPPED 2025-09-09T14:59:02.0953369Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype141-64-size_mnk141-True] SKIPPED 2025-09-09T14:59:02.1180167Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype142-64-size_mnk142-False] SKIPPED 2025-09-09T14:59:02.1181391Z test/test_ops_rowwise_scaled_linear_cutlass.py::test_rowwise_scaled_linear_cutlass_s8s4[dtype143-64-size_mnk143-True] SKIPPED 2025-09-09T14:59:02.1182867Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype0-Xq_Wq_dtypes0-1-size_mnk0-False] SKIPPED 2025-09-09T14:59:02.1184420Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype1-Xq_Wq_dtypes1-1-size_mnk1-True] SKIPPED 2025-09-09T14:59:02.1185820Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype2-Xq_Wq_dtypes2-1-size_mnk2-False] SKIPPED 2025-09-09T14:59:02.1187204Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype3-Xq_Wq_dtypes3-1-size_mnk3-True] SKIPPED 2025-09-09T14:59:02.1188660Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype4-Xq_Wq_dtypes4-1-size_mnk4-False] SKIPPED 2025-09-09T14:59:02.1190040Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype5-Xq_Wq_dtypes5-1-size_mnk5-True] SKIPPED 2025-09-09T14:59:02.1191529Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype6-Xq_Wq_dtypes6-1-size_mnk6-False] SKIPPED 2025-09-09T14:59:02.1192915Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype7-Xq_Wq_dtypes7-1-size_mnk7-True] SKIPPED 2025-09-09T14:59:02.1194305Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype8-Xq_Wq_dtypes8-1-size_mnk8-False] SKIPPED 2025-09-09T14:59:02.1195748Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype9-Xq_Wq_dtypes9-1-size_mnk9-True] SKIPPED 2025-09-09T14:59:02.1197229Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype10-Xq_Wq_dtypes10-1-size_mnk10-False] SKIPPED 2025-09-09T14:59:02.1198651Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype11-Xq_Wq_dtypes11-1-size_mnk11-True] SKIPPED 2025-09-09T14:59:02.1200054Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype12-Xq_Wq_dtypes12-4-size_mnk12-False] SKIPPED 2025-09-09T14:59:02.1201793Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype13-Xq_Wq_dtypes13-4-size_mnk13-True] SKIPPED 2025-09-09T14:59:02.1203200Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype14-Xq_Wq_dtypes14-4-size_mnk14-False] SKIPPED 2025-09-09T14:59:02.1204617Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype15-Xq_Wq_dtypes15-4-size_mnk15-True] SKIPPED 2025-09-09T14:59:02.1206113Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype16-Xq_Wq_dtypes16-4-size_mnk16-False] SKIPPED 2025-09-09T14:59:02.1207529Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype17-Xq_Wq_dtypes17-4-size_mnk17-True] SKIPPED 2025-09-09T14:59:02.1208931Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype18-Xq_Wq_dtypes18-4-size_mnk18-False] SKIPPED 2025-09-09T14:59:02.1210562Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype19-Xq_Wq_dtypes19-4-size_mnk19-True] SKIPPED 2025-09-09T14:59:02.1211983Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype20-Xq_Wq_dtypes20-4-size_mnk20-False] SKIPPED 2025-09-09T14:59:02.1213397Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype21-Xq_Wq_dtypes21-4-size_mnk21-True] SKIPPED 2025-09-09T14:59:02.1214880Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype22-Xq_Wq_dtypes22-4-size_mnk22-False] SKIPPED 2025-09-09T14:59:02.1216275Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype23-Xq_Wq_dtypes23-4-size_mnk23-True] SKIPPED 2025-09-09T14:59:02.1217687Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype24-Xq_Wq_dtypes24-1-size_mnk24-False] SKIPPED 2025-09-09T14:59:02.1219098Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype25-Xq_Wq_dtypes25-1-size_mnk25-True] SKIPPED 2025-09-09T14:59:02.1220558Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype26-Xq_Wq_dtypes26-1-size_mnk26-False] SKIPPED 2025-09-09T14:59:02.1221967Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype27-Xq_Wq_dtypes27-1-size_mnk27-True] SKIPPED 2025-09-09T14:59:02.1223381Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype28-Xq_Wq_dtypes28-1-size_mnk28-False] SKIPPED 2025-09-09T14:59:02.1224772Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype29-Xq_Wq_dtypes29-1-size_mnk29-True] SKIPPED 2025-09-09T14:59:02.1226184Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype30-Xq_Wq_dtypes30-1-size_mnk30-False] SKIPPED 2025-09-09T14:59:02.1227666Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype31-Xq_Wq_dtypes31-1-size_mnk31-True] SKIPPED 2025-09-09T14:59:02.1229076Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype32-Xq_Wq_dtypes32-1-size_mnk32-False] SKIPPED 2025-09-09T14:59:02.1230484Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype33-Xq_Wq_dtypes33-1-size_mnk33-True] SKIPPED 2025-09-09T14:59:02.1232050Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype34-Xq_Wq_dtypes34-1-size_mnk34-False] SKIPPED 2025-09-09T14:59:02.1233457Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype35-Xq_Wq_dtypes35-1-size_mnk35-True] SKIPPED 2025-09-09T14:59:02.1234941Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype36-Xq_Wq_dtypes36-4-size_mnk36-False] SKIPPED 2025-09-09T14:59:02.1236399Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype37-Xq_Wq_dtypes37-4-size_mnk37-True] SKIPPED 2025-09-09T14:59:02.1237806Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype38-Xq_Wq_dtypes38-4-size_mnk38-False] SKIPPED 2025-09-09T14:59:02.1239216Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype39-Xq_Wq_dtypes39-4-size_mnk39-True] SKIPPED 2025-09-09T14:59:02.1240617Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype40-Xq_Wq_dtypes40-4-size_mnk40-False] SKIPPED 2025-09-09T14:59:02.1242039Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype41-Xq_Wq_dtypes41-4-size_mnk41-True] SKIPPED 2025-09-09T14:59:02.1243460Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype42-Xq_Wq_dtypes42-4-size_mnk42-False] SKIPPED 2025-09-09T14:59:02.1244858Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype43-Xq_Wq_dtypes43-4-size_mnk43-True] SKIPPED 2025-09-09T14:59:02.1246309Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype44-Xq_Wq_dtypes44-4-size_mnk44-False] SKIPPED 2025-09-09T14:59:02.1247713Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype45-Xq_Wq_dtypes45-4-size_mnk45-True] SKIPPED 2025-09-09T14:59:02.1249138Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype46-Xq_Wq_dtypes46-4-size_mnk46-False] SKIPPED 2025-09-09T14:59:02.1250559Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype47-Xq_Wq_dtypes47-4-size_mnk47-True] SKIPPED 2025-09-09T14:59:02.1252052Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype48-Xq_Wq_dtypes48-1-size_mnk48-False] SKIPPED 2025-09-09T14:59:02.1480104Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype49-Xq_Wq_dtypes49-1-size_mnk49-True] SKIPPED 2025-09-09T14:59:02.1481782Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype50-Xq_Wq_dtypes50-1-size_mnk50-False] SKIPPED 2025-09-09T14:59:02.1483242Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype51-Xq_Wq_dtypes51-1-size_mnk51-True] SKIPPED 2025-09-09T14:59:02.1484689Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype52-Xq_Wq_dtypes52-1-size_mnk52-False] SKIPPED 2025-09-09T14:59:02.1486348Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype53-Xq_Wq_dtypes53-1-size_mnk53-True] SKIPPED 2025-09-09T14:59:02.1487797Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype54-Xq_Wq_dtypes54-1-size_mnk54-False] SKIPPED 2025-09-09T14:59:02.1489255Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype55-Xq_Wq_dtypes55-1-size_mnk55-True] SKIPPED 2025-09-09T14:59:02.1490705Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype56-Xq_Wq_dtypes56-1-size_mnk56-False] SKIPPED 2025-09-09T14:59:02.1492154Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype57-Xq_Wq_dtypes57-1-size_mnk57-True] SKIPPED 2025-09-09T14:59:02.1493601Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype58-Xq_Wq_dtypes58-1-size_mnk58-False] SKIPPED 2025-09-09T14:59:02.1495101Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype59-Xq_Wq_dtypes59-1-size_mnk59-True] SKIPPED 2025-09-09T14:59:02.1496543Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype60-Xq_Wq_dtypes60-4-size_mnk60-False] SKIPPED 2025-09-09T14:59:02.1497993Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype61-Xq_Wq_dtypes61-4-size_mnk61-True] SKIPPED 2025-09-09T14:59:02.1499439Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype62-Xq_Wq_dtypes62-4-size_mnk62-False] SKIPPED 2025-09-09T14:59:02.1500873Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype63-Xq_Wq_dtypes63-4-size_mnk63-True] SKIPPED 2025-09-09T14:59:02.1502331Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype64-Xq_Wq_dtypes64-4-size_mnk64-False] SKIPPED 2025-09-09T14:59:02.1503826Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype65-Xq_Wq_dtypes65-4-size_mnk65-True] SKIPPED 2025-09-09T14:59:02.1505283Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype66-Xq_Wq_dtypes66-4-size_mnk66-False] SKIPPED 2025-09-09T14:59:02.1506737Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype67-Xq_Wq_dtypes67-4-size_mnk67-True] SKIPPED 2025-09-09T14:59:02.1508171Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype68-Xq_Wq_dtypes68-4-size_mnk68-False] SKIPPED 2025-09-09T14:59:02.1509686Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype69-Xq_Wq_dtypes69-4-size_mnk69-True] SKIPPED 2025-09-09T14:59:02.1511406Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype70-Xq_Wq_dtypes70-4-size_mnk70-False] SKIPPED 2025-09-09T14:59:02.1512847Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype71-Xq_Wq_dtypes71-4-size_mnk71-True] SKIPPED 2025-09-09T14:59:02.1514300Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype72-Xq_Wq_dtypes72-1-size_mnk72-False] SKIPPED 2025-09-09T14:59:02.1515837Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype73-Xq_Wq_dtypes73-1-size_mnk73-True] SKIPPED 2025-09-09T14:59:02.1517351Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype74-Xq_Wq_dtypes74-1-size_mnk74-False] SKIPPED 2025-09-09T14:59:02.1518816Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype75-Xq_Wq_dtypes75-1-size_mnk75-True] SKIPPED 2025-09-09T14:59:02.1520271Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype76-Xq_Wq_dtypes76-1-size_mnk76-False] SKIPPED 2025-09-09T14:59:02.1521866Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype77-Xq_Wq_dtypes77-1-size_mnk77-True] SKIPPED 2025-09-09T14:59:02.1523278Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype78-Xq_Wq_dtypes78-1-size_mnk78-False] SKIPPED 2025-09-09T14:59:02.1524678Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype79-Xq_Wq_dtypes79-1-size_mnk79-True] SKIPPED 2025-09-09T14:59:02.1526132Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype80-Xq_Wq_dtypes80-1-size_mnk80-False] SKIPPED 2025-09-09T14:59:02.1527533Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype81-Xq_Wq_dtypes81-1-size_mnk81-True] SKIPPED 2025-09-09T14:59:02.1528927Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype82-Xq_Wq_dtypes82-1-size_mnk82-False] SKIPPED 2025-09-09T14:59:02.1530343Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype83-Xq_Wq_dtypes83-1-size_mnk83-True] SKIPPED 2025-09-09T14:59:02.1531759Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype84-Xq_Wq_dtypes84-4-size_mnk84-False] SKIPPED 2025-09-09T14:59:02.1533159Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype85-Xq_Wq_dtypes85-4-size_mnk85-True] SKIPPED 2025-09-09T14:59:02.1534750Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype86-Xq_Wq_dtypes86-4-size_mnk86-False] SKIPPED 2025-09-09T14:59:02.1536244Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype87-Xq_Wq_dtypes87-4-size_mnk87-True] SKIPPED 2025-09-09T14:59:02.1537686Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype88-Xq_Wq_dtypes88-4-size_mnk88-False] SKIPPED 2025-09-09T14:59:02.1539142Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype89-Xq_Wq_dtypes89-4-size_mnk89-True] SKIPPED 2025-09-09T14:59:02.1540587Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype90-Xq_Wq_dtypes90-4-size_mnk90-False] SKIPPED 2025-09-09T14:59:02.1542095Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype91-Xq_Wq_dtypes91-4-size_mnk91-True] SKIPPED 2025-09-09T14:59:02.1543553Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype92-Xq_Wq_dtypes92-4-size_mnk92-False] SKIPPED 2025-09-09T14:59:02.1544992Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype93-Xq_Wq_dtypes93-4-size_mnk93-True] SKIPPED 2025-09-09T14:59:02.1546441Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype94-Xq_Wq_dtypes94-4-size_mnk94-False] SKIPPED 2025-09-09T14:59:02.1547892Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype95-Xq_Wq_dtypes95-4-size_mnk95-True] SKIPPED 2025-09-09T14:59:02.1549395Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype96-Xq_Wq_dtypes96-1-size_mnk96-False] SKIPPED 2025-09-09T14:59:02.1550859Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype97-Xq_Wq_dtypes97-1-size_mnk97-True] SKIPPED 2025-09-09T14:59:02.1552303Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype98-Xq_Wq_dtypes98-1-size_mnk98-False] SKIPPED 2025-09-09T14:59:02.1775091Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype99-Xq_Wq_dtypes99-1-size_mnk99-True] SKIPPED 2025-09-09T14:59:02.1776538Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype100-Xq_Wq_dtypes100-1-size_mnk100-False] SKIPPED 2025-09-09T14:59:02.1778005Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype101-Xq_Wq_dtypes101-1-size_mnk101-True] SKIPPED 2025-09-09T14:59:02.1779555Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype102-Xq_Wq_dtypes102-1-size_mnk102-False] SKIPPED 2025-09-09T14:59:02.1780999Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype103-Xq_Wq_dtypes103-1-size_mnk103-True] SKIPPED 2025-09-09T14:59:02.1782438Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype104-Xq_Wq_dtypes104-1-size_mnk104-False] SKIPPED 2025-09-09T14:59:02.1783869Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype105-Xq_Wq_dtypes105-1-size_mnk105-True] SKIPPED 2025-09-09T14:59:02.1785310Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype106-Xq_Wq_dtypes106-1-size_mnk106-False] SKIPPED 2025-09-09T14:59:02.1786737Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype107-Xq_Wq_dtypes107-1-size_mnk107-True] SKIPPED 2025-09-09T14:59:02.1788229Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype108-Xq_Wq_dtypes108-4-size_mnk108-False] SKIPPED 2025-09-09T14:59:02.1789667Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype109-Xq_Wq_dtypes109-4-size_mnk109-True] SKIPPED 2025-09-09T14:59:02.1791086Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype110-Xq_Wq_dtypes110-4-size_mnk110-False] SKIPPED 2025-09-09T14:59:02.1792526Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype111-Xq_Wq_dtypes111-4-size_mnk111-True] SKIPPED 2025-09-09T14:59:02.1794026Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype112-Xq_Wq_dtypes112-4-size_mnk112-False] SKIPPED 2025-09-09T14:59:02.1795552Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype113-Xq_Wq_dtypes113-4-size_mnk113-True] SKIPPED 2025-09-09T14:59:02.1796992Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype114-Xq_Wq_dtypes114-4-size_mnk114-False] SKIPPED 2025-09-09T14:59:02.1798437Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype115-Xq_Wq_dtypes115-4-size_mnk115-True] SKIPPED 2025-09-09T14:59:02.1799861Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype116-Xq_Wq_dtypes116-4-size_mnk116-False] SKIPPED 2025-09-09T14:59:02.1801355Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype117-Xq_Wq_dtypes117-4-size_mnk117-True] SKIPPED 2025-09-09T14:59:02.1802806Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype118-Xq_Wq_dtypes118-4-size_mnk118-False] SKIPPED 2025-09-09T14:59:02.1804231Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype119-Xq_Wq_dtypes119-4-size_mnk119-True] SKIPPED 2025-09-09T14:59:02.1805663Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype120-Xq_Wq_dtypes120-1-size_mnk120-False] SKIPPED 2025-09-09T14:59:02.1807110Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype121-Xq_Wq_dtypes121-1-size_mnk121-True] SKIPPED 2025-09-09T14:59:02.1808538Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype122-Xq_Wq_dtypes122-1-size_mnk122-False] SKIPPED 2025-09-09T14:59:02.1810287Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype123-Xq_Wq_dtypes123-1-size_mnk123-True] SKIPPED 2025-09-09T14:59:02.1811754Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype124-Xq_Wq_dtypes124-1-size_mnk124-False] SKIPPED 2025-09-09T14:59:02.1813187Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype125-Xq_Wq_dtypes125-1-size_mnk125-True] SKIPPED 2025-09-09T14:59:02.1814629Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype126-Xq_Wq_dtypes126-1-size_mnk126-False] SKIPPED 2025-09-09T14:59:02.1816063Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype127-Xq_Wq_dtypes127-1-size_mnk127-True] SKIPPED 2025-09-09T14:59:02.1817511Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype128-Xq_Wq_dtypes128-1-size_mnk128-False] SKIPPED 2025-09-09T14:59:02.1819045Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype129-Xq_Wq_dtypes129-1-size_mnk129-True] SKIPPED 2025-09-09T14:59:02.1820476Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype130-Xq_Wq_dtypes130-1-size_mnk130-False] SKIPPED 2025-09-09T14:59:02.1821923Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype131-Xq_Wq_dtypes131-1-size_mnk131-True] SKIPPED 2025-09-09T14:59:02.1823364Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype132-Xq_Wq_dtypes132-4-size_mnk132-False] SKIPPED 2025-09-09T14:59:02.1824842Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype133-Xq_Wq_dtypes133-4-size_mnk133-True] SKIPPED 2025-09-09T14:59:02.1826287Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype134-Xq_Wq_dtypes134-4-size_mnk134-False] SKIPPED 2025-09-09T14:59:02.1827749Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype135-Xq_Wq_dtypes135-4-size_mnk135-True] SKIPPED 2025-09-09T14:59:02.1829304Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype136-Xq_Wq_dtypes136-4-size_mnk136-False] SKIPPED 2025-09-09T14:59:02.1830755Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype137-Xq_Wq_dtypes137-4-size_mnk137-True] SKIPPED 2025-09-09T14:59:02.1832258Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype138-Xq_Wq_dtypes138-4-size_mnk138-False] SKIPPED 2025-09-09T14:59:02.1833707Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype139-Xq_Wq_dtypes139-4-size_mnk139-True] SKIPPED 2025-09-09T14:59:02.1835233Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype140-Xq_Wq_dtypes140-4-size_mnk140-False] SKIPPED 2025-09-09T14:59:02.1836827Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype141-Xq_Wq_dtypes141-4-size_mnk141-True] SKIPPED 2025-09-09T14:59:02.1838282Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype142-Xq_Wq_dtypes142-4-size_mnk142-False] SKIPPED 2025-09-09T14:59:02.1839734Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype143-Xq_Wq_dtypes143-4-size_mnk143-True] SKIPPED 2025-09-09T14:59:02.1841216Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype144-Xq_Wq_dtypes144-1-size_mnk144-False] SKIPPED 2025-09-09T14:59:02.1842668Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype145-Xq_Wq_dtypes145-1-size_mnk145-True] SKIPPED 2025-09-09T14:59:02.1844124Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype146-Xq_Wq_dtypes146-1-size_mnk146-False] SKIPPED 2025-09-09T14:59:02.1845554Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype147-Xq_Wq_dtypes147-1-size_mnk147-True] SKIPPED 2025-09-09T14:59:02.2295420Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype148-Xq_Wq_dtypes148-1-size_mnk148-False] SKIPPED 2025-09-09T14:59:02.2296948Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype149-Xq_Wq_dtypes149-1-size_mnk149-True] SKIPPED 2025-09-09T14:59:02.2298383Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype150-Xq_Wq_dtypes150-1-size_mnk150-False] SKIPPED 2025-09-09T14:59:02.2300011Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype151-Xq_Wq_dtypes151-1-size_mnk151-True] SKIPPED 2025-09-09T14:59:02.2301453Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype152-Xq_Wq_dtypes152-1-size_mnk152-False] SKIPPED 2025-09-09T14:59:02.2302878Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype153-Xq_Wq_dtypes153-1-size_mnk153-True] SKIPPED 2025-09-09T14:59:02.2304923Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype154-Xq_Wq_dtypes154-1-size_mnk154-False] SKIPPED 2025-09-09T14:59:02.2306556Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype155-Xq_Wq_dtypes155-1-size_mnk155-True] SKIPPED 2025-09-09T14:59:02.2307984Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype156-Xq_Wq_dtypes156-4-size_mnk156-False] SKIPPED 2025-09-09T14:59:02.2309413Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype157-Xq_Wq_dtypes157-4-size_mnk157-True] SKIPPED 2025-09-09T14:59:02.2311075Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype158-Xq_Wq_dtypes158-4-size_mnk158-False] SKIPPED 2025-09-09T14:59:02.2312732Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype159-Xq_Wq_dtypes159-4-size_mnk159-True] SKIPPED 2025-09-09T14:59:02.2314620Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype160-Xq_Wq_dtypes160-4-size_mnk160-False] SKIPPED 2025-09-09T14:59:02.2316056Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype161-Xq_Wq_dtypes161-4-size_mnk161-True] SKIPPED 2025-09-09T14:59:02.2317496Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype162-Xq_Wq_dtypes162-4-size_mnk162-False] SKIPPED 2025-09-09T14:59:02.2318924Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype163-Xq_Wq_dtypes163-4-size_mnk163-True] SKIPPED 2025-09-09T14:59:02.2320348Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype164-Xq_Wq_dtypes164-4-size_mnk164-False] SKIPPED 2025-09-09T14:59:02.2321842Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype165-Xq_Wq_dtypes165-4-size_mnk165-True] SKIPPED 2025-09-09T14:59:02.2323284Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype166-Xq_Wq_dtypes166-4-size_mnk166-False] SKIPPED 2025-09-09T14:59:02.2324717Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype167-Xq_Wq_dtypes167-4-size_mnk167-True] SKIPPED 2025-09-09T14:59:02.2326152Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype168-Xq_Wq_dtypes168-1-size_mnk168-False] SKIPPED 2025-09-09T14:59:02.2327590Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype169-Xq_Wq_dtypes169-1-size_mnk169-True] SKIPPED 2025-09-09T14:59:02.2329021Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype170-Xq_Wq_dtypes170-1-size_mnk170-False] SKIPPED 2025-09-09T14:59:02.2330554Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype171-Xq_Wq_dtypes171-1-size_mnk171-True] SKIPPED 2025-09-09T14:59:02.2332046Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype172-Xq_Wq_dtypes172-1-size_mnk172-False] SKIPPED 2025-09-09T14:59:02.2333469Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype173-Xq_Wq_dtypes173-1-size_mnk173-True] SKIPPED 2025-09-09T14:59:02.2334906Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype174-Xq_Wq_dtypes174-1-size_mnk174-False] SKIPPED 2025-09-09T14:59:02.2336386Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype175-Xq_Wq_dtypes175-1-size_mnk175-True] SKIPPED 2025-09-09T14:59:02.2337824Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype176-Xq_Wq_dtypes176-1-size_mnk176-False] SKIPPED 2025-09-09T14:59:02.2339263Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype177-Xq_Wq_dtypes177-1-size_mnk177-True] SKIPPED 2025-09-09T14:59:02.2340686Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype178-Xq_Wq_dtypes178-1-size_mnk178-False] SKIPPED 2025-09-09T14:59:02.2342119Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype179-Xq_Wq_dtypes179-1-size_mnk179-True] SKIPPED 2025-09-09T14:59:02.2343596Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype180-Xq_Wq_dtypes180-4-size_mnk180-False] SKIPPED 2025-09-09T14:59:02.2345026Z 
test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype181-Xq_Wq_dtypes181-4-size_mnk181-True] SKIPPED 2025-09-09T14:59:02.2346467Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype182-Xq_Wq_dtypes182-4-size_mnk182-False] SKIPPED 2025-09-09T14:59:02.2347899Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype183-Xq_Wq_dtypes183-4-size_mnk183-True] SKIPPED 2025-09-09T14:59:02.2349320Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype184-Xq_Wq_dtypes184-4-size_mnk184-False] SKIPPED 2025-09-09T14:59:02.2350748Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype185-Xq_Wq_dtypes185-4-size_mnk185-True] SKIPPED 2025-09-09T14:59:02.2352183Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype186-Xq_Wq_dtypes186-4-size_mnk186-False] SKIPPED 2025-09-09T14:59:02.2353641Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype187-Xq_Wq_dtypes187-4-size_mnk187-True] SKIPPED 2025-09-09T14:59:02.2355145Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype188-Xq_Wq_dtypes188-4-size_mnk188-False] SKIPPED 2025-09-09T14:59:02.2356581Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype189-Xq_Wq_dtypes189-4-size_mnk189-True] SKIPPED 2025-09-09T14:59:02.2358028Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype190-Xq_Wq_dtypes190-4-size_mnk190-False] SKIPPED 2025-09-09T14:59:02.2359465Z test/test_ops_rowwise_scaled_linear_sparse_cutlass.py::test_rowwise_scaled_linear_sparse_cutlass_f8f8[dtype191-Xq_Wq_dtypes191-4-size_mnk191-True] SKIPPED 2025-09-09T14:59:02.2360509Z test/test_utils.py::TestTorchVersion::test_torch_version_at_least PASSED 2025-09-09T14:59:02.2361250Z test/test_utils.py::TestTorchVersion::test_torch_version_deprecation PASSED 2025-09-09T14:59:02.2362022Z test/test_utils.py::TestTorchAOBaseTensor::test_default_impls SKIPPED 2025-09-09T14:59:02.2362843Z test/test_utils.py::TestTorchAOBaseTensor::test_default_impls_with_optional_attr SKIPPED 2025-09-09T14:59:02.2363729Z test/test_utils.py::TestTorchAOBaseTensor::test_default_impls_with_optional_data SKIPPED 2025-09-09T14:59:02.2364510Z test/test_utils.py::TestTorchAOBaseTensor::test_print_arg_types PASSED 2025-09-09T14:59:02.2364910Z 2025-09-09T14:59:02.2365156Z =============================== warnings summary =============================== 2025-09-09T14:59:02.2365738Z ../../opt/conda/envs/venv/lib/python3.9/site-packages/torch/__init__.py:1605 2025-09-09T14:59:02.2368629Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/__init__.py:1605: UserWarning: Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, allowTF32CuDNN() and allowTF32CuBLAS() will be deprecated after Pytorch 2.9. Please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:80.) 
2025-09-09T14:59:02.2371442Z _C._set_float32_matmul_precision(precision) 2025-09-09T14:59:02.2371681Z 2025-09-09T14:59:02.2371937Z test/core/test_config.py::test_reconstructable_dict_file_round_trip[config8] 2025-09-09T14:59:02.2373286Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/core/config.py:250: UserWarning: Stored version is not the same as current default version of the config: stored_version=2, current_default_version=1, please check the deprecation warning 2025-09-09T14:59:02.2374462Z warnings.warn( 2025-09-09T14:59:02.2374601Z 2025-09-09T14:59:02.2374794Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_copy_bfloat16 2025-09-09T14:59:02.2375293Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_copy_float16 2025-09-09T14:59:02.2375779Z test/dtypes/test_nf4.py::TestNF4Linear::test_to_copy_float32 2025-09-09T14:59:02.2377222Z /pytorch/ao/test/dtypes/test_nf4.py:223: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844. 2025-09-09T14:59:02.2378736Z torch.testing.assert_allclose(input_tensor, nf4_to_dtype, atol=0.13, rtol=0.13) 2025-09-09T14:59:02.2379122Z 2025-09-09T14:59:02.2379369Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype3] 2025-09-09T14:59:02.2379993Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype4] 2025-09-09T14:59:02.2380571Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype5] 2025-09-09T14:59:02.2381136Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype6] 2025-09-09T14:59:02.2381714Z test/float8/test_float8_utils.py::test_non_float32_input[invalid_dtype7] 2025-09-09T14:59:02.2382879Z /pytorch/ao/test/float8/test_float8_utils.py:67: DeprecationWarning: an integer is required (got type float). Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python. 2025-09-09T14:59:02.2384013Z non_float32_tensor = torch.tensor([3.0], dtype=invalid_dtype) 2025-09-09T14:59:02.2384309Z 2025-09-09T14:59:02.2384651Z test/integration/test_integration.py::SmoothquantIntegrationTest::test_on_dummy_distilbert 2025-09-09T14:59:02.2385653Z /pytorch/ao/test/integration/test_integration.py:1440: DeprecationWarning: torch.ao.quantization is deprecated and will be removed in 2.10. 2025-09-09T14:59:02.2386434Z For migrations of users: 2025-09-09T14:59:02.2387243Z 1. Eager mode quantization (torch.ao.quantization.quantize, torch.ao.quantization.quantize_dynamic), please migrate to use torchao eager mode quantize_ API instead 2025-09-09T14:59:02.2388935Z 2. FX graph mode quantization (torch.ao.quantization.quantize_fx.prepare_fx,torch.ao.quantization.quantize_fx.convert_fx, please migrate to use torchao pt2e quantization API instead (prepare_pt2e, convert_pt2e) 2025-09-09T14:59:02.2390299Z 3. 
pt2e quantization has been migrated to torchao (https://github.com/pytorch/ao/tree/main/torchao/quantization/pt2e) 2025-09-09T14:59:02.2391017Z see https://github.com/pytorch/ao/issues/2259 for more details 2025-09-09T14:59:02.2391490Z model_copy2 = torch.ao.quantization.quantize_dynamic( 2025-09-09T14:59:02.2391801Z 2025-09-09T14:59:02.2392141Z test/integration/test_integration.py::SmoothquantIntegrationTest::test_on_dummy_distilbert 2025-09-09T14:59:02.2393317Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/ao/quantization/quantize.py:566: DeprecationWarning: torch.ao.quantization is deprecated and will be removed in 2.10. 2025-09-09T14:59:02.2394271Z For migrations of users: 2025-09-09T14:59:02.2395138Z 1. Eager mode quantization (torch.ao.quantization.quantize, torch.ao.quantization.quantize_dynamic), please migrate to use torchao eager mode quantize_ API instead 2025-09-09T14:59:02.2396693Z 2. FX graph mode quantization (torch.ao.quantization.quantize_fx.prepare_fx,torch.ao.quantization.quantize_fx.convert_fx, please migrate to use torchao pt2e quantization API instead (prepare_pt2e, convert_pt2e) 2025-09-09T14:59:02.2398050Z 3. pt2e quantization has been migrated to torchao (https://github.com/pytorch/ao/tree/main/torchao/quantization/pt2e) 2025-09-09T14:59:02.2398776Z see https://github.com/pytorch/ao/issues/2259 for more details 2025-09-09T14:59:02.2399246Z convert(model, mapping, inplace=True) 2025-09-09T14:59:02.2399532Z 2025-09-09T14:59:02.2399836Z test/kernel/test_autotuner.py::TestQuantFlow::test_int_scaled_mm_1_cpu 2025-09-09T14:59:02.2400563Z test/kernel/test_autotuner.py::TestQuantFlow::test_int_scaled_mm_3_cpu 2025-09-09T14:59:02.2402094Z /pytorch/ao/test/kernel/test_autotuner.py:96: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844. 2025-09-09T14:59:02.2403523Z torch.testing.assert_allclose(out32_1, out32_2) 2025-09-09T14:59:02.2403793Z 2025-09-09T14:59:02.2404131Z test/prototype/test_codebook_quant.py::TestCodebookQuantization::test_choose_qparams_codebook 2025-09-09T14:59:02.2405612Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py:904: UserWarning: index_reduce() is in beta and the API may change at any time. (Triggered internally at /pytorch/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1517.) 2025-09-09T14:59:02.2406889Z return callable(*args, **kwargs) 2025-09-09T14:59:02.2407107Z 2025-09-09T14:59:02.2407358Z test/prototype/test_parametrization.py::TestFakeSparsity::test_jit_trace 2025-09-09T14:59:02.2409172Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/sparsity/sparsifier/utils.py:134: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
2025-09-09T14:59:02.2410972Z assert self.mask.shape == x.shape 2025-09-09T14:59:02.2411198Z 2025-09-09T14:59:02.2411438Z test/prototype/test_scheduler.py::TestScheduler::test_lambda_scheduler 2025-09-09T14:59:02.2411998Z test/prototype/test_scheduler.py::TestCubicScheduler::test_step 2025-09-09T14:59:02.2429275Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/sparsity/scheduler/base_scheduler.py:133: UserWarning: Detected call of `scheduler.step()` before `sparsifier.step()`. You have to make sure you run the sparsifier.step() BEFORE any calls to the scheduler.step(). 2025-09-09T14:59:02.2430756Z warnings.warn( 2025-09-09T14:59:02.2430907Z 2025-09-09T14:59:02.2431453Z test/prototype/test_structured_sparsifier.py::TestBaseStructuredSparsifier::test_complex_conv2d 2025-09-09T14:59:02.2432810Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/prototype/sparsity/pruner/prune_functions.py:347: UserWarning: Converting a tensor with requires_grad=True to a scalar may lead to unexpected behavior. 2025-09-09T14:59:02.2434325Z Consider using tensor.detach() first. (Triggered internally at /pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:836.) 2025-09-09T14:59:02.2435236Z flattened_pruned_biases = torch.tensor( 2025-09-09T14:59:02.2435468Z 2025-09-09T14:59:02.2435759Z test/quantization/pt2e/test_graph_utils.py::TestGraphUtils::test_conv_bn_conv_relu 2025-09-09T14:59:02.2437257Z /pytorch/ao/test/quantization/pt2e/test_graph_utils.py:42: FutureWarning: export(f, *args, **kwargs) is deprecated, use export(f)(*args, **kwargs) instead. If you don't migrate, we may break your export call in the future if your user defined kwargs conflict with future kwargs added to export(f). 2025-09-09T14:59:02.2438750Z m, guards = torchdynamo.export( # noqa: F841 2025-09-09T14:59:02.2439000Z 2025-09-09T14:59:02.2439267Z test/quantization/pt2e/test_graph_utils.py::TestGraphUtils::test_conv_bn_relu 2025-09-09T14:59:02.2440740Z /pytorch/ao/test/quantization/pt2e/test_graph_utils.py:86: FutureWarning: export(f, *args, **kwargs) is deprecated, use export(f)(*args, **kwargs) instead. If you don't migrate, we may break your export call in the future if your user defined kwargs conflict with future kwargs added to export(f). 2025-09-09T14:59:02.2442105Z m, guards = torchdynamo.export( # noqa: F841 2025-09-09T14:59:02.2442363Z 2025-09-09T14:59:02.2442713Z test/quantization/pt2e/test_graph_utils.py::TestGraphUtils::test_customized_equivalet_types_dict 2025-09-09T14:59:02.2444269Z /pytorch/ao/test/quantization/pt2e/test_graph_utils.py:118: FutureWarning: export(f, *args, **kwargs) is deprecated, use export(f)(*args, **kwargs) instead. If you don't migrate, we may break your export call in the future if your user defined kwargs conflict with future kwargs added to export(f). 2025-09-09T14:59:02.2445581Z m, guards = torchdynamo.export( # noqa: F841 2025-09-09T14:59:02.2445836Z 2025-09-09T14:59:02.2446018Z test/quantization/pt2e/test_quantize_pt2e.py: 18 warnings 2025-09-09T14:59:02.2446509Z test/quantization/pt2e/test_quantize_pt2e_qat.py: 75 warnings 2025-09-09T14:59:02.2446996Z test/quantization/pt2e/test_representation.py: 8 warnings 2025-09-09T14:59:02.2447869Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/testing/pt2e/_xnnpack_quantizer.py:289: UserWarning: XNNPACKQuantizer is deprecated!
2025-09-09T14:59:02.2448779Z warnings.warn(f"{self.__class__.__name__} is deprecated!") 2025-09-09T14:59:02.2449080Z 2025-09-09T14:59:02.2449447Z test/quantization/pt2e/test_quantize_pt2e.py::TestQuantizePT2E::test_allow_exported_model_train_eval 2025-09-09T14:59:02.2450243Z test/quantization/pt2e/test_quantize_pt2e.py::TestQuantizePT2E::test_disallow_eval_train 2025-09-09T14:59:02.2451073Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_annotate_mul_tensor 2025-09-09T14:59:02.2452017Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_annotate_mul_tensor 2025-09-09T14:59:02.2452942Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_annotate_mul_tensor 2025-09-09T14:59:02.2453884Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_filter_conv2d_recipe 2025-09-09T14:59:02.2454826Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_filter_linear_recipe 2025-09-09T14:59:02.2456107Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/pt2e/quantize_pt2e.py:305: DeprecationWarning: torch.ao.quantization is deprecated and will be removed in 2.10. 2025-09-09T14:59:02.2457100Z For migrations of users: 2025-09-09T14:59:02.2457927Z 1. Eager mode quantization (torch.ao.quantization.quantize, torch.ao.quantization.quantize_dynamic), please migrate to use torchao eager mode quantize_ API instead 2025-09-09T14:59:02.2459486Z 2. FX graph mode quantization (torch.ao.quantization.quantize_fx.prepare_fx,torch.ao.quantization.quantize_fx.convert_fx, please migrate to use torchao pt2e quantization API instead (prepare_pt2e, convert_pt2e) 2025-09-09T14:59:02.2460853Z 3. pt2e quantization has been migrated to torchao (https://github.com/pytorch/ao/tree/main/torchao/quantization/pt2e) 2025-09-09T14:59:02.2461609Z see https://github.com/pytorch/ao/issues/2259 for more details 2025-09-09T14:59:02.2462179Z return torch_convert_pt2e(model, use_reference_representation, fold_quantize) 2025-09-09T14:59:02.2462549Z 2025-09-09T14:59:02.2462734Z test/quantization/pt2e/test_quantize_pt2e.py: 192 warnings 2025-09-09T14:59:02.2463233Z test/quantization/pt2e/test_quantize_pt2e_qat.py: 252 warnings 2025-09-09T14:59:02.2464629Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/ao/quantization/pt2e/utils.py:359: FutureWarning: `torch.export.export_for_training` is deprecated and will be removed in PyTorch 2.10. Please use `torch.export.export` instead, which is functionally equivalent. 2025-09-09T14:59:02.2465953Z aten_pattern = torch.export.export_for_training( 2025-09-09T14:59:02.2466225Z 2025-09-09T14:59:02.2466606Z test/quantization/pt2e/test_quantize_pt2e.py::TestQuantizePT2E::test_embedding_conv_linear_quantization 2025-09-09T14:59:02.2467405Z test/quantization/pt2e/test_quantize_pt2e.py::TestQuantizePT2E::test_embedding_quantizer 2025-09-09T14:59:02.2468579Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/testing/pt2e/utils.py:108: DeprecationWarning: torch.ao.quantization is deprecated and will be removed in 2.10. 2025-09-09T14:59:02.2469515Z For migrations of users: 2025-09-09T14:59:02.2470311Z 1. Eager mode quantization (torch.ao.quantization.quantize, torch.ao.quantization.quantize_dynamic), please migrate to use torchao eager mode quantize_ API instead 2025-09-09T14:59:02.2471870Z 2. 
FX graph mode quantization (torch.ao.quantization.quantize_fx.prepare_fx,torch.ao.quantization.quantize_fx.convert_fx, please migrate to use torchao pt2e quantization API instead (prepare_pt2e, convert_pt2e) 2025-09-09T14:59:02.2473221Z 3. pt2e quantization has been migrated to torchao (https://github.com/pytorch/ao/tree/main/torchao/quantization/pt2e) 2025-09-09T14:59:02.2473946Z see https://github.com/pytorch/ao/issues/2259 for more details 2025-09-09T14:59:02.2474350Z m_fx = prepare_fx( 2025-09-09T14:59:02.2474572Z 2025-09-09T14:59:02.2474882Z test/quantization/pt2e/test_quantize_pt2e.py::TestQuantizePT2E::test_model_is_exported 2025-09-09T14:59:02.2476416Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py:922: UserWarning: Was not able to add assertion to guarantee correct input x to specialized function. It is up to the user to make sure that your inputs match the inputs you specialized the function with. 2025-09-09T14:59:02.2477693Z warnings.warn( 2025-09-09T14:59:02.2477834Z 2025-09-09T14:59:02.2478109Z test/quantization/pt2e/test_quantize_pt2e.py::TestQuantizePT2E::test_reentrant 2025-09-09T14:59:02.2478905Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_fold_bn_erases_bn_node 2025-09-09T14:59:02.2479809Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_fold_bn_erases_bn_node 2025-09-09T14:59:02.2481069Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/pt2e/utils.py:145: UserWarning: must run observer before calling calculate_qparams. Returning default values. 2025-09-09T14:59:02.2482026Z warnings.warn( 2025-09-09T14:59:02.2482169Z 2025-09-09T14:59:02.2482442Z test/quantization/pt2e/test_quantize_pt2e.py::TestQuantizePT2E::test_reentrant 2025-09-09T14:59:02.2483752Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/pt2e/observer.py:1350: UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point 2025-09-09T14:59:02.2484832Z warnings.warn( 2025-09-09T14:59:02.2484971Z 2025-09-09T14:59:02.2485410Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_bias_derived_qspec 2025-09-09T14:59:02.2486434Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_conv_bn_per_channel_weight_bias 2025-09-09T14:59:02.2487451Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn1d::test_qat_per_channel_weight_custom_dtype 2025-09-09T14:59:02.2488493Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_bias_derived_qspec 2025-09-09T14:59:02.2489502Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_conv_bn_per_channel_weight_bias 2025-09-09T14:59:02.2490523Z test/quantization/pt2e/test_quantize_pt2e_qat.py::TestQuantizePT2EQAT_ConvBn2d::test_qat_per_channel_weight_custom_dtype 2025-09-09T14:59:02.2492091Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/pt2e/observer.py:253: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. 
2025-09-09T14:59:02.2493262Z warnings.warn( 2025-09-09T14:59:02.2493415Z 2025-09-09T14:59:02.2493610Z test/quantization/pt2e/test_quantize_pt2e_qat.py: 40 warnings 2025-09-09T14:59:02.2494561Z /pytorch/ao/test/quantization/pt2e/test_quantize_pt2e_qat.py:165: DeprecationWarning: torch.ao.quantization is deprecated and will be removed in 2.10. 2025-09-09T14:59:02.2495381Z For migrations of users: 2025-09-09T14:59:02.2496192Z 1. Eager mode quantization (torch.ao.quantization.quantize, torch.ao.quantization.quantize_dynamic), please migrate to use torchao eager mode quantize_ API instead 2025-09-09T14:59:02.2497743Z 2. FX graph mode quantization (torch.ao.quantization.quantize_fx.prepare_fx,torch.ao.quantization.quantize_fx.convert_fx, please migrate to use torchao pt2e quantization API instead (prepare_pt2e, convert_pt2e) 2025-09-09T14:59:02.2499104Z 3. pt2e quantization has been migrated to torchao (https://github.com/pytorch/ao/tree/main/torchao/quantization/pt2e) 2025-09-09T14:59:02.2499840Z see https://github.com/pytorch/ao/issues/2259 for more details 2025-09-09T14:59:02.2500243Z model_fx = prepare_qat_fx( 2025-09-09T14:59:02.2500438Z 2025-09-09T14:59:02.2500631Z test/quantization/pt2e/test_quantize_pt2e_qat.py: 40 warnings 2025-09-09T14:59:02.2501682Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/ao/quantization/fx/prepare.py:464: DeprecationWarning: torch.ao.quantization is deprecated and will be removed in 2.10. 2025-09-09T14:59:02.2502682Z For migrations of users: 2025-09-09T14:59:02.2503489Z 1. Eager mode quantization (torch.ao.quantization.quantize, torch.ao.quantization.quantize_dynamic), please migrate to use torchao eager mode quantize_ API instead 2025-09-09T14:59:02.2505025Z 2. FX graph mode quantization (torch.ao.quantization.quantize_fx.prepare_fx,torch.ao.quantization.quantize_fx.convert_fx, please migrate to use torchao pt2e quantization API instead (prepare_pt2e, convert_pt2e) 2025-09-09T14:59:02.2506381Z 3. pt2e quantization has been migrated to torchao (https://github.com/pytorch/ao/tree/main/torchao/quantization/pt2e) 2025-09-09T14:59:02.2507109Z see https://github.com/pytorch/ao/issues/2259 for more details 2025-09-09T14:59:02.2507668Z convert(root, mapping=module_to_qat_module, inplace=True, remove_qconfig=False) 2025-09-09T14:59:02.2508044Z 2025-09-09T14:59:02.2508365Z test/quantization/pt2e/test_x86inductor_fusion.py::TestPatternMatcher::test_qconv2d_add_3 2025-09-09T14:59:02.2509217Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_filter_conv2d_recipe 2025-09-09T14:59:22.2771115Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/pt2e/quantizer/x86_inductor_quantizer.py:1325: UserWarning: The input of maxpool2d is not quantized, skip annotate maxpool2d with config QuantizationConfig(input_activation=QuantizationSpec(dtype=torch.uint8, observer_or_fake_quant_ctr=functools.partial(, eps=0.000244140625){}, quant_min=0, quant_max=255, qscheme=torch.per_tensor_affine, ch_axis=None, is_dynamic=False), output_activation=QuantizationSpec(dtype=torch.uint8, observer_or_fake_quant_ctr=functools.partial(, eps=0.000244140625){}, quant_min=0, quant_max=255, qscheme=torch.per_tensor_affine, ch_axis=None, is_dynamic=False), weight=QuantizationSpec(dtype=torch.int8, observer_or_fake_quant_ctr=functools.partial(, eps=0.000244140625){}, quant_min=-128, quant_max=127, qscheme=torch.per_channel_symmetric, ch_axis=0, is_dynamic=False), bias=None, is_qat=False). 
2025-09-09T14:59:22.2776188Z warnings.warn( 2025-09-09T14:59:22.2776331Z 2025-09-09T14:59:22.2776748Z test/quantization/pt2e/test_x86inductor_fusion.py::TestDynamicPatternMatcher::test_q_attention_block 2025-09-09T14:59:22.2777612Z test/quantization/pt2e/test_x86inductor_fusion.py::TestDynamicPatternMatcher::test_q_attention_block 2025-09-09T14:59:22.2778580Z test/quantization/pt2e/test_x86inductor_fusion.py::TestDynamicPatternMatcher::test_qconv2d_maxpool2d_linear_dynamic_cpu 2025-09-09T14:59:22.2780337Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/_inductor/mkldnn_lowerings.py:736: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than torch.tensor(sourceTensor). 2025-09-09T14:59:22.2781778Z torch.tensor(w_zp_tensor, dtype=torch.int32), name=w_zp.get_name() 2025-09-09T14:59:22.2782108Z 2025-09-09T14:59:22.2782655Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_and_module_type_with_mixed_configs 2025-09-09T14:59:22.2784139Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/pt2e/quantizer/x86_inductor_quantizer.py:484: UserWarning: Mixed dynamic and static quantization config is not supported. 2025-09-09T14:59:22.2785153Z warnings.warn( 2025-09-09T14:59:22.2785293Z 2025-09-09T14:59:22.2785854Z test/quantization/pt2e/test_x86inductor_quantizer.py::TestQuantizePT2EX86Inductor::test_set_module_name_and_module_type_with_mixed_configs 2025-09-09T14:59:22.2787385Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/pt2e/quantizer/x86_inductor_quantizer.py:383: UserWarning: Skip the quantization config for . 2025-09-09T14:59:22.2788499Z warnings.warn( 2025-09-09T14:59:22.2788636Z 2025-09-09T14:59:22.2788874Z test/quantization/test_qat.py::TestQAT::test_legacy_quantize_api_e2e 2025-09-09T14:59:22.2790187Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/qat/utils.py:84: UserWarning: 'IntXQuantizationAwareTrainingConfig' is deprecated and will be removed in a future release. Please use the following API instead: 2025-09-09T14:59:22.2791356Z 2025-09-09T14:59:22.2791682Z base_config = Int8DynamicActivationInt4WeightConfig(group_size=32) 2025-09-09T14:59:22.2792207Z quantize_(model, QATConfig(base_config, step="prepare")) 2025-09-09T14:59:22.2792593Z # train (not shown) 2025-09-09T14:59:22.2792941Z quantize_(model, QATConfig(base_config, step="convert")) 2025-09-09T14:59:22.2793309Z 2025-09-09T14:59:22.2793621Z Alternatively, if you prefer to pass in fake quantization configs: 2025-09-09T14:59:22.2794029Z 2025-09-09T14:59:22.2794435Z activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False) 2025-09-09T14:59:22.2795165Z weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32) 2025-09-09T14:59:22.2795594Z qat_config = QATConfig( 2025-09-09T14:59:22.2795968Z activation_config=activation_config, 2025-09-09T14:59:22.2796299Z weight_config=weight_config, 2025-09-09T14:59:22.2796609Z step="prepare", 2025-09-09T14:59:22.2796848Z ) 2025-09-09T14:59:22.2797071Z quantize_(model, qat_config) 2025-09-09T14:59:22.2797336Z 2025-09-09T14:59:22.2797687Z Please see https://github.com/pytorch/ao/issues/2630 for more details. 
2025-09-09T14:59:22.2798102Z 2025-09-09T14:59:22.2798316Z warnings.warn( 2025-09-09T14:59:22.2798458Z 2025-09-09T14:59:22.2798722Z test/quantization/test_qat.py::TestQAT::test_legacy_quantize_api_e2e 2025-09-09T14:59:22.2800085Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/quantization/qat/utils.py:84: UserWarning: 'FromIntXQuantizationAwareTrainingConfig' is deprecated and will be removed in a future release. Please use the following API instead: 2025-09-09T14:59:22.2801285Z 2025-09-09T14:59:22.2801614Z base_config = Int8DynamicActivationInt4WeightConfig(group_size=32) 2025-09-09T14:59:22.2802144Z quantize_(model, QATConfig(base_config, step="prepare")) 2025-09-09T14:59:22.2802522Z # train (not shown) 2025-09-09T14:59:22.2802865Z quantize_(model, QATConfig(base_config, step="convert")) 2025-09-09T14:59:22.2803219Z 2025-09-09T14:59:22.2803542Z Alternatively, if you prefer to pass in fake quantization configs: 2025-09-09T14:59:22.2803946Z 2025-09-09T14:59:22.2804349Z activation_config = IntxFakeQuantizeConfig(torch.int8, "per_token", is_symmetric=False) 2025-09-09T14:59:22.2804989Z weight_config = IntxFakeQuantizeConfig(torch.int4, group_size=32) 2025-09-09T14:59:22.2805439Z qat_config = QATConfig( 2025-09-09T14:59:22.2805746Z activation_config=activation_config, 2025-09-09T14:59:22.2806070Z weight_config=weight_config, 2025-09-09T14:59:22.2806373Z step="prepare", 2025-09-09T14:59:22.2806604Z ) 2025-09-09T14:59:22.2806819Z quantize_(model, qat_config) 2025-09-09T14:59:22.2807092Z 2025-09-09T14:59:22.2807410Z Please see https://github.com/pytorch/ao/issues/2630 for more details. 2025-09-09T14:59:22.2807830Z 2025-09-09T14:59:22.2808026Z warnings.warn( 2025-09-09T14:59:22.2808164Z 2025-09-09T14:59:22.2808391Z test/quantization/test_qat.py::TestQAT::test_qat_fp8a4w_quantizer 2025-09-09T14:59:22.2812253Z /opt/conda/envs/venv/lib/python3.9/site-packages/torch/autograd/graph.py:841: UserWarning: torchao::dequantize_affine_float8: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at /pytorch/torch/csrc/autograd/autograd_not_implemented_fallback.cpp:62.) 
2025-09-09T14:59:22.2816051Z return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2025-09-09T14:59:22.2816514Z 2025-09-09T14:59:22.2816767Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_one_layer_mlp_2x4 2025-09-09T14:59:22.2817859Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/sparsity/wanda.py:46: UserWarning: WandaSparsifier got semi_structured_bock_size=4, sparsity_level fixed to 50% (2:4) sparsity 2025-09-09T14:59:22.2818806Z warnings.warn( 2025-09-09T14:59:22.2818947Z 2025-09-09T14:59:22.2819187Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_one_layer_mlp_2x4 2025-09-09T14:59:22.2819901Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_one_layer_mlp_unstructured 2025-09-09T14:59:22.2820499Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_prepare 2025-09-09T14:59:22.2821079Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_squash_mask 2025-09-09T14:59:22.2821697Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_two_layer_mlp_unstructured 2025-09-09T14:59:22.2822426Z test/sparsity/test_wanda.py::TestWandaSparsifier::test_two_layer_mlp_unstructured_custom_config 2025-09-09T14:59:22.2823616Z /opt/conda/envs/venv/lib/python3.9/site-packages/torchao/sparsity/wanda.py:75: DeprecationWarning: torch.ao.quantization is deprecated and will be removed in 2.10. 2025-09-09T14:59:22.2824505Z For migrations of users: 2025-09-09T14:59:22.2825360Z 1. Eager mode quantization (torch.ao.quantization.quantize, torch.ao.quantization.quantize_dynamic), please migrate to use torchao eager mode quantize_ API instead 2025-09-09T14:59:22.2826921Z 2. FX graph mode quantization (torch.ao.quantization.quantize_fx.prepare_fx,torch.ao.quantization.quantize_fx.convert_fx, please migrate to use torchao pt2e quantization API instead (prepare_pt2e, convert_pt2e) 2025-09-09T14:59:22.2828269Z 3. pt2e quantization has been migrated to torchao (https://github.com/pytorch/ao/tree/main/torchao/quantization/pt2e) 2025-09-09T14:59:22.2829010Z see https://github.com/pytorch/ao/issues/2259 for more details 2025-09-09T14:59:22.2829487Z torch.ao.quantization.prepare(model, inplace=True) 2025-09-09T14:59:22.2829757Z 2025-09-09T14:59:22.2829982Z -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 2025-09-09T14:59:22.2831006Z ======== 1408 passed, 5569 skipped, 684 warnings in 3050.12s (0:50:50) ========= 2025-09-09T14:59:22.2890987Z ##[group]Run pmeier/pytest-results-action@a2c1430e2bddadbad9f49a6f9b879f062c6b19b1 2025-09-09T14:59:22.2891511Z with: 2025-09-09T14:59:22.2891806Z path: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:22.2892212Z fail-on-empty: false 2025-09-09T14:59:22.2892441Z env: 2025-09-09T14:59:22.2892684Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:22.2893024Z REPOSITORY: pytorch/ao 2025-09-09T14:59:22.2893268Z PR_NUMBER: 2963 2025-09-09T14:59:22.2894777Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:22.2896487Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:22.2897126Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:22.2897745Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:22.2898139Z ##[endgroup] 2025-09-09T14:59:22.3713541Z Prepare all required actions 2025-09-09T14:59:22.3754905Z ##[group]Run ./test-infra/.github/actions/chown-directory 2025-09-09T14:59:22.3755267Z with: 2025-09-09T14:59:22.3755557Z directory: /home/ec2-user/actions-runner/_work/ao/ao/ 2025-09-09T14:59:22.3756057Z ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine 2025-09-09T14:59:22.3756471Z env: 2025-09-09T14:59:22.3756714Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:22.3757043Z REPOSITORY: pytorch/ao 2025-09-09T14:59:22.3757301Z PR_NUMBER: 2963 2025-09-09T14:59:22.3758786Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:22.3760543Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:22.3761132Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:22.3761697Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:22.3762094Z ##[endgroup] 2025-09-09T14:59:22.3787869Z ##[group]Run docker run --rm -v "${DIRECTORY}":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" . 2025-09-09T14:59:22.3788582Z docker run --rm -v "${DIRECTORY}":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" . 2025-09-09T14:59:22.3809836Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:59:22.3810424Z env: 2025-09-09T14:59:22.3810674Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:22.3811119Z REPOSITORY: pytorch/ao 2025-09-09T14:59:22.3811378Z PR_NUMBER: 2963 2025-09-09T14:59:22.3812885Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:22.3814577Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:22.3815179Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:22.3815731Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:22.3816259Z ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine 2025-09-09T14:59:22.3816750Z DIRECTORY: /home/ec2-user/actions-runner/_work/ao/ao/ 2025-09-09T14:59:22.3817111Z ##[endgroup] 2025-09-09T14:59:22.4034215Z Unable to find image '308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine:latest' locally 2025-09-09T14:59:22.6430811Z latest: Pulling from tool/alpine 2025-09-09T14:59:22.6431313Z 540db60ca938: Pulling fs layer 2025-09-09T14:59:22.7487446Z 540db60ca938: Download complete 2025-09-09T14:59:22.8350422Z 540db60ca938: Pull complete 2025-09-09T14:59:22.8475007Z Digest: sha256:def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748 2025-09-09T14:59:22.8522408Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine:latest 2025-09-09T14:59:24.4941295Z Prepare all required actions 2025-09-09T14:59:24.4969037Z ##[group]Run ./test-infra/.github/actions/chown-directory 2025-09-09T14:59:24.4969410Z with: 2025-09-09T14:59:24.4969684Z directory: /home/ec2-user/actions-runner/_work/_temp 2025-09-09T14:59:24.4970186Z ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine 2025-09-09T14:59:24.4970647Z env: 2025-09-09T14:59:24.4970915Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:24.4971266Z REPOSITORY: pytorch/ao 2025-09-09T14:59:24.4971606Z PR_NUMBER: 2963 2025-09-09T14:59:24.4973112Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:24.4974853Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:24.4975438Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:24.4976003Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:24.4976384Z ##[endgroup] 2025-09-09T14:59:24.5001800Z ##[group]Run docker run --rm -v "${DIRECTORY}":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" . 2025-09-09T14:59:24.5002520Z docker run --rm -v "${DIRECTORY}":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" . 2025-09-09T14:59:24.5012699Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:59:24.5013086Z env: 2025-09-09T14:59:24.5013330Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:24.5013680Z REPOSITORY: pytorch/ao 2025-09-09T14:59:24.5013932Z PR_NUMBER: 2963 2025-09-09T14:59:24.5015447Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:24.5017133Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:24.5017719Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:24.5018366Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:24.5018905Z ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine 2025-09-09T14:59:24.5019396Z DIRECTORY: /home/ec2-user/actions-runner/_work/_temp 2025-09-09T14:59:24.5019757Z ##[endgroup] 2025-09-09T14:59:25.6066697Z ##[group]Run # Only do these steps if we actually want to upload an artifact 2025-09-09T14:59:25.6067340Z # Only do these steps if we actually want to upload an artifact 2025-09-09T14:59:25.6067799Z if [[ -n "${UPLOAD_ARTIFACT_NAME}" ]]; then 2025-09-09T14:59:25.6068354Z  # If the default execution path is followed then we should get a wheel in the dist/ folder 2025-09-09T14:59:25.6068979Z  # attempt to just grab whatever is in there and scoop it all up 2025-09-09T14:59:25.6069489Z  if find "dist/" -name "*.whl" >/dev/null 2>/dev/null; then 2025-09-09T14:59:25.6069931Z  mv -v dist/*.whl "${RUNNER_ARTIFACT_DIR}/" 2025-09-09T14:59:25.6070276Z  fi 2025-09-09T14:59:25.6070560Z  if [[ -d "artifacts-to-be-uploaded" ]]; then 2025-09-09T14:59:25.6071003Z  mv -v artifacts-to-be-uploaded/* "${RUNNER_ARTIFACT_DIR}/" 2025-09-09T14:59:25.6071407Z  fi 2025-09-09T14:59:25.6071654Z fi 2025-09-09T14:59:25.6071850Z  2025-09-09T14:59:25.6072064Z upload_docs=0 2025-09-09T14:59:25.6072455Z # Check if there are files in the documentation folder to upload, note that 2025-09-09T14:59:25.6072929Z # empty folders do not count 2025-09-09T14:59:25.6073396Z if find "${RUNNER_DOCS_DIR}" -mindepth 1 -maxdepth 1 -type f | read -r; then 2025-09-09T14:59:25.6074005Z  # TODO: Add a check here to test if on ec2 because if we're not on ec2 then this 2025-09-09T14:59:25.6074599Z  # upload will probably not work correctly 2025-09-09T14:59:25.6074945Z  upload_docs=1 2025-09-09T14:59:25.6075211Z fi 2025-09-09T14:59:25.6075517Z echo "upload-docs=${upload_docs}" >> "${GITHUB_OUTPUT}" 2025-09-09T14:59:25.6082777Z shell: /usr/bin/bash -e {0} 2025-09-09T14:59:25.6083065Z env: 2025-09-09T14:59:25.6083310Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:25.6083669Z REPOSITORY: pytorch/ao 2025-09-09T14:59:25.6083919Z PR_NUMBER: 2963 2025-09-09T14:59:25.6085439Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:25.6087141Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:25.6087733Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:25.6088301Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:25.6088707Z UPLOAD_ARTIFACT_NAME: 2025-09-09T14:59:25.6088958Z ##[endgroup] 2025-09-09T14:59:25.6218787Z Prepare all required actions 2025-09-09T14:59:25.6257675Z ##[group]Run ./test-infra/.github/actions/teardown-linux 2025-09-09T14:59:25.6258027Z with: 2025-09-09T14:59:25.6258234Z env: 2025-09-09T14:59:25.6258468Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:25.6258812Z REPOSITORY: pytorch/ao 2025-09-09T14:59:25.6259058Z PR_NUMBER: 2963 2025-09-09T14:59:25.6260559Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:25.6262252Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:25.6262942Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:25.6263551Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:25.6263933Z ##[endgroup] 2025-09-09T14:59:25.6291132Z ##[group]Run set -eou pipefail 2025-09-09T14:59:25.6291471Z set -eou pipefail 2025-09-09T14:59:25.6291728Z  2025-09-09T14:59:25.6292105Z echo "Holding runner for 2 hours until all ssh sessions have logged out" 2025-09-09T14:59:25.6292582Z for _ in $(seq 1440); do 2025-09-09T14:59:25.6292906Z  # Break if no ssh session exists anymore 2025-09-09T14:59:25.6293265Z  if [ "$(who)" = "" ]; then 2025-09-09T14:59:25.6293548Z  break 2025-09-09T14:59:25.6293779Z  fi 2025-09-09T14:59:25.6294010Z  echo "." 2025-09-09T14:59:25.6294239Z  sleep 5 2025-09-09T14:59:25.6294476Z done 2025-09-09T14:59:25.6300140Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:59:25.6300535Z env: 2025-09-09T14:59:25.6300781Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:25.6301129Z REPOSITORY: pytorch/ao 2025-09-09T14:59:25.6301376Z PR_NUMBER: 2963 2025-09-09T14:59:25.6302884Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . 
export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:25.6304599Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:25.6305193Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:25.6305764Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:25.6306152Z ##[endgroup] 2025-09-09T14:59:25.6331038Z Holding runner for 2 hours until all ssh sessions have logged out 2025-09-09T14:59:25.6416479Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-09-09T14:59:25.6417066Z # ignore expansion of "docker ps -q" since it could be empty 2025-09-09T14:59:25.6417519Z # shellcheck disable=SC2046 2025-09-09T14:59:25.6417855Z docker stop $(docker ps -q) || true 2025-09-09T14:59:25.6418201Z # Prune all of the docker images 2025-09-09T14:59:25.6418517Z docker system prune -af 2025-09-09T14:59:25.6424147Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:59:25.6424514Z env: 2025-09-09T14:59:25.6424768Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:25.6425097Z REPOSITORY: pytorch/ao 2025-09-09T14:59:25.6425361Z PR_NUMBER: 2963 2025-09-09T14:59:25.6427018Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:25.6428725Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:25.6429330Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:25.6429879Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:25.6430274Z ##[endgroup] 2025-09-09T14:59:26.9455360Z f3a755ba68cb 2025-09-09T14:59:30.3228662Z Deleted Containers: 2025-09-09T14:59:30.3229104Z f3a755ba68cb7be3ae6465e0287a3d53dc5126ae70ee1cbee8ed8517704cf634 2025-09-09T14:59:30.3229451Z 2025-09-09T14:59:34.9538463Z Deleted Images: 2025-09-09T14:59:34.9539028Z untagged: pytorch/almalinux-builder:cpu 2025-09-09T14:59:34.9539702Z untagged: pytorch/almalinux-builder@sha256:10f309602e8cd84e21cb6970f97544761dd12a06b141583ab4d45f0bac4bf651 2025-09-09T14:59:34.9540686Z deleted: sha256:d6a8fef7076378a67f34a587132b48533aeb29b267a5d532b5b9c8df70af258b 2025-09-09T14:59:34.9541361Z deleted: sha256:5ee80ac5eaac1f2e1a07ecf3b3488008351b9350af841eed478e2e8c24e6f42a 2025-09-09T14:59:34.9542016Z deleted: sha256:a65598dc7a77543b8c2087c984c4d399c538c793064f336291e43cd23c0d4bee 2025-09-09T14:59:34.9542672Z deleted: sha256:75bba60f865bdfb654effb55beba5e38d571601662e689a4eb428757bfbd966d 2025-09-09T14:59:34.9543316Z deleted: sha256:b970969a082500ab27d2bf9eac213044fc772525f683f5fc7332989e30c76480 2025-09-09T14:59:34.9543961Z deleted: sha256:bc559781ad080de9f6860d476855c5af704239cf63d44c57086a59b50d27e62d 2025-09-09T14:59:34.9544601Z deleted: sha256:cc6a3c301e1c09d37986dc9f00ef5acee60b16b3beb546ba626465df575ddc6f 2025-09-09T14:59:34.9545269Z deleted: sha256:bfff11b1687c8218c22ee1e3b72bf01d75b62571b1328a8d0fba8d430ad5f2e5 2025-09-09T14:59:34.9545915Z deleted: sha256:da2b845a29eb0c1156d11b959ebf0922384d243241d1b71e5ada57fe43f8d31b 2025-09-09T14:59:34.9546575Z deleted: 
sha256:36f4610db00e3bc6f0c36306b6e3ae9c953b4135dbbab9705bb1040a1c1a428a 2025-09-09T14:59:34.9547243Z deleted: sha256:3786e3bfaf2a9ae97394f7b5f00680409659a2da27de80d81bedf3f024d96905 2025-09-09T14:59:34.9547878Z deleted: sha256:a1f8b206db55ff3248ff449e3d374688489e9e78184454cb68bd621b1eaae1ef 2025-09-09T14:59:34.9548517Z deleted: sha256:1a446cf0c102c19364757b664cb9b87039da4e5b37717769d86d0de9dea0bcbd 2025-09-09T14:59:34.9549154Z deleted: sha256:0ef2a1cbee577b4f78bc15b15d7f0806734a6999f0a40e87fffc92b7e9e58fad 2025-09-09T14:59:34.9549810Z deleted: sha256:130b9be4ea5bde75920a4d5ffb16053799c94b8a93712f043a0cb564a247a775 2025-09-09T14:59:34.9550464Z deleted: sha256:7ee0ef5f99efb59425750d99644df2c4890820ad3b993fc67c7d223f4ac0d032 2025-09-09T14:59:34.9551108Z deleted: sha256:b2137bde5bf6be21e1746d6ea0c8cab11f4f58745e132c5a2855c4bccc2c1ce3 2025-09-09T14:59:34.9551763Z deleted: sha256:f7827189db61f670718a2b94e71e48195c0baf64ee4f80458bc5fb383510d43f 2025-09-09T14:59:34.9552387Z deleted: sha256:ff4f19608a1944c0c2807cd533515673285a9632dc74bf020e83e18630d1ae35 2025-09-09T14:59:34.9553097Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine:latest 2025-09-09T14:59:34.9553971Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine@sha256:def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748 2025-09-09T14:59:34.9554966Z deleted: sha256:6dbb9cc54074106d46d4ccb330f2a40a682d49dda5f4844962b7dce9fe44aaec 2025-09-09T14:59:34.9555642Z deleted: sha256:b2d5eeeaba3a22b9b8aa97261957974a6bd65274ebd43e1d81d0a7b8b752b116 2025-09-09T14:59:34.9556035Z 2025-09-09T14:59:34.9572628Z Total reclaimed space: 7.099GB 2025-09-09T14:59:34.9623700Z ##[group]Run set +e 2025-09-09T14:59:34.9623976Z set +e 2025-09-09T14:59:34.9624235Z if [[ "${NO_SUDO}" == "false" ]]; then 2025-09-09T14:59:34.9624643Z  sudo rm -rf "${GITHUB_WORKSPACE:?}/${REPOSITORY:?}" 2025-09-09T14:59:34.9624999Z else 2025-09-09T14:59:34.9625281Z  rm -rf "${GITHUB_WORKSPACE:?}/${REPOSITORY:?}" 2025-09-09T14:59:34.9625646Z fi 2025-09-09T14:59:34.9625868Z set -e 2025-09-09T14:59:34.9631396Z shell: /usr/bin/bash -e {0} 2025-09-09T14:59:34.9631670Z env: 2025-09-09T14:59:34.9631917Z DOCKER_IMAGE: pytorch/almalinux-builder:cpu 2025-09-09T14:59:34.9632263Z REPOSITORY: pytorch/ao 2025-09-09T14:59:34.9632510Z PR_NUMBER: 2963 2025-09-09T14:59:34.9634062Z SCRIPT: conda create -n venv python=3.9 -y conda activate venv python -m pip install --upgrade pip pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu pip install -r dev-requirements.txt pip install . export CONDA=$(dirname $(dirname $(which conda))) export LD_LIBRARY_PATH=$CONDA/lib/:$LD_LIBRARY_PATH pytest test --verbose -s 2025-09-09T14:59:34.9635835Z RUNNER_ARTIFACT_DIR: /home/ec2-user/actions-runner/_work/_temp/artifacts 2025-09-09T14:59:34.9636445Z RUNNER_TEST_RESULTS_DIR: /home/ec2-user/actions-runner/_work/_temp/test-results 2025-09-09T14:59:34.9637107Z RUNNER_DOCS_DIR: /home/ec2-user/actions-runner/_work/_temp/docs 2025-09-09T14:59:34.9637495Z NO_SUDO: false 2025-09-09T14:59:34.9637723Z ##[endgroup] 2025-09-09T14:59:35.4810976Z Post job cleanup. 2025-09-09T14:59:35.6122420Z Post job cleanup. 
2025-09-09T14:59:35.7055267Z [command]/usr/bin/git version 2025-09-09T14:59:35.7111012Z git version 2.47.1 2025-09-09T14:59:35.7158450Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/c919f4ec-894f-410c-a61c-6c8f3d6a983b' before making global git config changes 2025-09-09T14:59:35.7159542Z Adding repository directory to the temporary git global config as a safe directory 2025-09-09T14:59:35.7165111Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/ao/ao/test-infra 2025-09-09T14:59:35.7203587Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2025-09-09T14:59:35.7237655Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :" 2025-09-09T14:59:35.7801624Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-09-09T14:59:35.7832448Z http.https://github.com/.extraheader 2025-09-09T14:59:35.7841823Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2025-09-09T14:59:35.7868448Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-09-09T14:59:35.8248400Z A job completed hook has been configured by the self-hosted runner administrator 2025-09-09T14:59:35.8276001Z ##[group]Run '/home/ec2-user/runner-scripts/after_job.sh' 2025-09-09T14:59:35.8281364Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-09T14:59:35.8281766Z ##[endgroup] 2025-09-09T14:59:35.8437091Z [!ALERT!] Swap in detected! [!ALERT!] 2025-09-09T14:59:46.7683635Z [!ALERT!] Swap out detected [!ALERT!] 2025-09-09T15:00:05.7651952Z Cleaning up orphan processes
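
The TestWandaSparsifier warnings recorded earlier in this log come from torchao/sparsity/wanda.py. What follows is a minimal sketch (not taken from the log) of the prepare/step/squash_mask flow those tests exercise, assuming the torch.ao.pruning BaseSparsifier-style interface that WandaSparsifier inherits; the exact constructor and config details may differ between torchao versions.

    import torch
    import torch.nn as nn
    from torchao.sparsity.wanda import WandaSparsifier  # module path taken from the warning in the log

    model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10))

    # Per the UserWarning in the log, semi_structured_block_size=4 pins sparsity_level to 50% (2:4).
    sparsifier = WandaSparsifier(semi_structured_block_size=4)

    # Config format assumed from torch.ao.pruning: one entry per weight tensor to sparsify.
    sparsifier.prepare(model, config=[{"tensor_fqn": "0.weight"}])

    # Wanda scores weights by |weight| * input-activation norm, so run a few calibration batches.
    with torch.no_grad():
        for _ in range(8):
            model(torch.randn(32, 128))

    sparsifier.step()         # compute the 2:4 masks from the collected statistics
    sparsifier.squash_mask()  # fold the masks back into the weights (cf. test_squash_mask)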
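The DeprecationWarning in the same test run asks callers of torch.ao.quantization to move to torchao's own APIs. Below is a hedged sketch of the eager-mode path it mentions, assuming the quantize_ entry point; the config class name Int8DynamicActivationInt8WeightConfig is an assumption about current torchao naming and may be spelled differently in the installed nightly.

    import torch.nn as nn
    from torchao.quantization import quantize_, Int8DynamicActivationInt8WeightConfig  # config name assumed

    model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10))

    # Deprecated eager path: torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    # torchao replacement: quantize_ rewrites the matching modules in place according to the config.
    quantize_(model, Int8DynamicActivationInt8WeightConfig())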